May 24, 2018
I have critiqued how results are presented in the previous version of EP7, which gives an example implying that if an interference is found to be less than 10% (or whatever goal is chosen), the substance can be said not to interfere.
This is in Section 9 of the 2nd Edition. I am curious if this problem is in the 3rd edition but not curious enough to buy the standard.
May 13, 2018
I have discussed some shortcomings in how interferences are handled. This reminded me of something that my coworker and I published a number of years ago (1).
This publication originated with Dr. Stan Bauer at Technicon Instruments, a pathologist with a passion for statistics. He had hired Cuthbert Daniel, a well-known consulting statistician, who developed a protocol for the SMA analyzer. This was a nine-sample run at three concentration levels that provided estimates of precision, proportional and constant bias, sample carryover, linear drift, and nonlinearity. The reason the protocol worked was the sample order chosen by Cuthbert Daniel.
In 1985, I chose to make a CLSI standard out of the protocol – EP10. It is now in version A3 AMD. (I have no idea what the AMD means).
The protocol could be extended to provide even more information by adding a candidate interfering substance at up to all three concentration levels. Since each level is run in three replicates, the interfering substance is added to only one replicate per level. Using multiple regression, one can then estimate eight parameters: the original parameters plus the interference bias (if any) at each of the three concentration levels.
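Under the hood this is just ordinary least squares with a design matrix whose columns correspond to the error sources. Here is a minimal sketch; the sample order, concentrations, spiked positions, and parameter values below are invented for illustration and are NOT the actual Cuthbert Daniel / EP10 design:

```python
import numpy as np

# Hypothetical 9-sample run at three levels (0=low, 1=mid, 2=high),
# in an arbitrary order -- not the EP10 sequence.
order = np.array([1, 2, 0, 1, 1, 0, 0, 2, 2])
conc = np.array([50.0, 100.0, 150.0])[order]   # target concentrations
t = np.arange(9, dtype=float)                  # run position (for drift)
carry = np.r_[conc[0], conc[:-1]]              # previous sample's concentration
spike = np.zeros((9, 3))                       # one spiked replicate per level
spike[6, 0] = spike[4, 1] = spike[8, 2] = 1.0  # (low, mid, high) -- made up

# Design matrix: intercept (constant bias), slope (proportional bias),
# carryover, linear drift, nonlinearity (quadratic), 3 interference biases.
X = np.column_stack([np.ones(9), conc, carry, t, conc**2, spike])

# Simulate a noise-free run with known parameters, then recover them
# (9 observations, 8 parameters).  Real runs of course contain noise.
truth = np.array([2.0, 1.01, 0.005, 0.1, 0.0, -8.0, 5.0, 3.0])
y = X @ truth
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta[5:], 3))   # recovered interference biases at each level
```

With noise-free data the fit recovers the three interference biases exactly; with real data, repeating the run (as described below) is what makes an interference detectable.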
Now one run is virtually useless, but at Ciba Corning, we ran these protocols repeatedly during the development of an assay, so that with multiple runs, if a substance interfered, it would be detected.
1. Krouwer JS and Monti KL. A Modification of EP10 to Include Interference Screening. Clin. Chem., 41, 325-6 (1995).
May 10, 2018
I have recently suggested that the CLSI EP7 standard causes problems (1). Basically, EP7 says that if an interfering substance results in an interference less than the goal (commonly set at 10%), then the substance can be reported not to interfere. Of course, this makes no sense. If a substance interferes at a level less than 10%, it still interferes!
Here’s a real example from the literature (2). Lorenz and coworkers say “substances frequently reported to interfere with enzymatic, electrochemical-based transcutaneous CGM systems, such as acetaminophen and ascorbic acid, did not affect Eversense readings.”
Yet in their table of interference results they show:
at 74 mg/dL of glucose, interference from 3 mg/dL of acetaminophen is -8.7 mg/dL
at 77 mg/dL of glucose, interference from 2 mg/dL of ascorbic acid is 7.7 mg/dL
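For scale, those absolute biases can be expressed as a percent of the measured glucose (a quick check using the table values above):

```python
# Table values from Lorenz et al.: (glucose mg/dL, interference bias mg/dL)
cases = {"acetaminophen": (74.0, -8.7), "ascorbic acid": (77.0, 7.7)}
for name, (glucose, bias) in cases.items():
    # relative interference as a fraction of the glucose reading
    print(f"{name}: {bias / glucose:+.1%} of the reading")
```

Both work out to roughly 10% or more of the reading, which makes the "did not affect" claim even harder to defend.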
1. Krouwer, J.S. Accred Qual Assur (2018). https://doi.org/10.1007/s00769-018-1315-y
2. Lorenz C, Sandoval W, and Mortellaro M. Interference Assessment of Various Endogenous and Exogenous Substances on the Performance of the Eversense Long-Term Implantable Continuous Glucose Monitoring System. Diabetes Technology & Therapeutics, 20(5), 2018. DOI: 10.1089/dia.2018.0028.
April 20, 2018
My article “Interferences, a neglected error source for clinical assays” has been published. This article may be viewed using the following link https://rdcu.be/L6O2
March 11, 2018
Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that an assay can meet specifications while still producing errors that cause serious patient harm! Large and harmful errors are rare, certainly occurring in less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason: if a large error occurs, a 95% or 99% specification can be an advantage for a hospital if there is a lawsuit.
I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault. (In this case it was a user error). The hospital lab’s defense was that they followed all procedures and met all standards, e.g., sorry but stuff happens.
As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.
That one can never prove that the number of large errors is zero is true but this does not mean one should abandon a specification for 100% of the results.
Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can calculate the failure rate and reduce the risk of no result with redundancy (meaning using multiple instruments). No matter how many instruments are used, the possibility that all instruments will fail at the same time is not zero!
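The redundancy arithmetic is simple. Assuming independent failures and a made-up per-instrument downtime of 2%, the chance of having no result shrinks geometrically with each added instrument but never reaches zero:

```python
p_down = 0.02  # hypothetical probability that one instrument is unavailable

for n in range(1, 5):
    # with n instruments, no result is available only if all are down at once
    # (assumes failures are independent, which real labs must verify)
    print(f"{n} instrument(s): P(no result) = {p_down ** n:.0e}")
```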
A final problem with not specifying 100% of the results is that it may cause labs to not put that much thought into procedures to minimize the risk of large errors.
And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this was dropped in the A2 version).
February 24, 2018
A few blog entries ago, I described a case when calculating the SD did not provide an estimate of random error because the observations contained drift.
Any time data analysis is used to estimate a parameter, there is usually a set of assumptions that must be checked to ensure that the parameter estimate will be valid. In the case of estimating random error from a set of observations of the same sample, an assumption is that the errors are IIDN: independently and identically distributed, normal with mean zero and variance sigma squared. This can be checked visually by examining a plot of the observations vs. time, the distribution of the residuals, the residuals vs. time, or any other plot that makes sense.
The model is: Yi = ηi + εi and the residuals are simply Yi – YiPredicted (observed minus predicted).
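As a sketch (all numbers made up): simulate replicate measurements of one sample with a small linear drift, then compare the naive SD with the residual SD after fitting out the drift:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
t = np.arange(n, dtype=float)

# Hypothetical replicates of one sample: true random error sigma = 1,
# plus a linear drift of 0.1 units per observation.
y = 100.0 + 0.1 * t + rng.normal(0.0, 1.0, n)

# The naive SD treats the drift as random error and overestimates sigma.
print("naive SD:", y.std(ddof=1))

# Checking the IIDN assumption: fit observations vs. time, inspect residuals.
slope, intercept = np.polyfit(t, y, 1)
resid = y - (intercept + slope * t)
print("fitted drift per observation:", slope)       # near the simulated 0.1
print("SD after removing drift:", resid.std(ddof=1))  # closer to sigma = 1
```

A plot of `resid` vs. `t` (or `y` vs. `t`) would show the same thing visually: a trend means the observations are not independent of time, and the naive SD is not a valid estimate of random error.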
February 18, 2018
To recall, total analytical error was proposed by Westgard in 1974. It made a lot of sense to me, and I proposed to CLSI that a total analytical error standard be written. The proposal was approved, I formed a subcommittee which I chaired, and in 2003 the CLSI standard about total analytical error, EP21-A, was published.
When it was time to revise the standard – all standards are considered for revision – I realized that the standard had some flaws. Although the original Westgard article was specific to total analytical error, it seemed that to a clinician, any error that contributed to the final result was important regardless of its source. And for me, who often worked in blood gas evaluations, user error was an important contribution to total error.
Hence, I suggested that the revision be about total error, not total analytical error, and the EP21-A2 drafts had total error in the title. Some people within the subcommittee, and particularly one or two people not on the subcommittee but in CLSI management, hated the idea, threw me off my own subcommittee, and ultimately out of CLSI.
But recently (in 2018) a total error task force published an article which contained the statement, to which I have previously referred:
“Lately, efforts have been made to expand the TAE concept to the evaluation of results of patient samples, including all phases of the total testing process.” (I put in the bolding).
Hence, I’m hoping that the next revision, EP21-A3 will be about total error, not total analytical error.