March 11, 2018
Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that an assay can meet the specification and still produce errors that cause serious patient harm! Large and harmful errors are rare – certainly less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason: if a large error occurs, the 95% or 99% specification can be an advantage for a hospital if there is a lawsuit.
I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details, but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault. (In this case it was a user error.) The hospital lab’s defense was that they followed all procedures and met all standards – i.e., sorry, but stuff happens.
As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.
That one can never prove the number of large errors is zero is true, but this does not mean one should abandon a specification for 100% of the results.
Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can calculate the failure rate and reduce the risk of no result with redundancy (meaning using multiple instruments). No matter how many instruments are used, the possibility that all instruments will fail at the same time is not zero!
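As a quick illustration of the redundancy arithmetic (my own sketch, with made-up numbers, not from any lab’s actual data): if each analyzer is independently unavailable with probability p, the chance that all n analyzers are down at the same time is p to the power n.

```python
# Hypothetical illustration: probability that every blood gas
# analyzer is unavailable at once, assuming each instrument
# fails independently with probability p.
def prob_all_down(p: float, n: int) -> float:
    """Chance that all n instruments fail at the same time."""
    return p ** n

# With a (made-up) 1% downtime per instrument:
for n in range(1, 4):
    print(n, prob_all_down(0.01, n))
```

Redundancy drives the probability down very quickly, but – as noted above – it never reaches zero.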
A final problem with not specifying 100% of the results is that it may cause labs to not put that much thought into procedures to minimize the risk of large errors.
And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this was dropped in the A2 version).
February 24, 2018
A few blog entries ago, I described a case when calculating the SD did not provide an estimate of random error because the observations contained drift.
Any time that data analysis is used to estimate a parameter, there is usually a set of assumptions that must be checked to ensure that the parameter estimate will be valid. In the case of estimating random error from a set of observations of the same sample, an assumption is that the errors are IIDN: independent and identically distributed, following a normal distribution with mean zero and variance sigma squared. This can be checked visually by examining a plot of the observations vs. time, the distribution of the residuals, the residuals vs. time, or any other plot that makes sense.
The model is: Yi = ηi + εi and the residuals are simply Yi – YiPredicted (observed minus predicted).
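A small simulation (my own sketch, not from the original post) shows the point: when the observations contain drift, the naive SD lumps the drift in with the random error, while the SD of the residuals from a fitted drift model recovers it.

```python
import numpy as np

rng = np.random.default_rng(0)

# 50 replicate measurements of one sample, with linear drift added.
n = 50
t = np.arange(n)
true_sd = 1.0                                    # true random error
y = 100 + 0.2 * t + rng.normal(0, true_sd, n)    # drift violates IIDN

# Naive SD treats the drift as random error and overestimates it.
naive_sd = y.std(ddof=1)

# Fit the drift (a straight line here) and take the SD of the residuals.
coef = np.polyfit(t, y, 1)
residuals = y - np.polyval(coef, t)    # observed minus predicted
resid_sd = residuals.std(ddof=2)       # two fitted parameters

print(f"naive SD: {naive_sd:.2f}, residual SD: {resid_sd:.2f}")
```

The naive SD comes out around three times the true random error here, while the residual SD lands near the true value of 1 – which is exactly why the IIDN assumption needs to be checked before the SD is reported.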
February 18, 2018
To recall, total analytical error was proposed by Westgard in 1974. It made a lot of sense to me, and I proposed to CLSI that a total analytical error standard should be written. This proposal was approved and I formed a subcommittee, which I chaired, and in 2003 the CLSI standard EP21-A, which covers total analytical error, was published.
When it was time to revise the standard – all standards are considered for revision – I realized that the standard had some flaws. Although the original Westgard article was specific to total analytical error, it seemed that to a clinician, any error that contributed to the final result was important regardless of its source. And for me, who often worked in blood gas evaluations, user error was an important contribution to total error.
Hence, I suggested that the revision be about total error, not total analytical error, and the EP21-A2 drafts had total error in the title. There were some people within the subcommittee, and particularly one or two people not on the subcommittee but in CLSI management, who hated the idea, threw me off my own subcommittee, and ultimately forced me out of CLSI.
But recently (in 2018) a total error task force published an article which contained the statement, to which I have previously referred:
“Lately, efforts have been made to expand the TAE concept to the evaluation of results of patient samples, including all phases of the total testing process.” (I put in the bolding).
Hence, I’m hoping that the next revision, EP21-A3 will be about total error, not total analytical error.
April 11, 2017
For almost all of my career, I’ve been working to determine performance specifications for assays, including the protocol and data analysis methods to see if performance has been met. This work has been performed mainly for companies but occasionally also for standards groups. There are some big differences.
Within a company, the specifications are very important:
If the product is released too soon, before the required performance has been met, the product may be recalled, patients may suffer harm, and overall the company may suffer financially.
If the product is released too late, the company will definitely suffer financially as “time to market” has been shown in financial models to be a key success factor in achieving profit goals.
Company specifications are built around two main factors – what performance is competitive and how the company can be sure that no patients will be harmed. In my experience this has simply led to two goals: 95% of the differences between the company assay and reference should be within limits that guarantee a competitive assay, and no difference should be large enough to cause patient harm (a clinical standard).
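A minimal check of such a two-part specification might look like the sketch below. The limits (10 and 30 units) and the data are hypothetical, purely for illustration.

```python
import numpy as np

def meets_spec(differences, limit_95, clinical_limit):
    """Two-part spec: at least 95% of differences from reference
    within limit_95, and none beyond the clinical (harm) limit."""
    d = np.abs(np.asarray(differences, dtype=float))
    return bool((d <= limit_95).mean() >= 0.95 and d.max() <= clinical_limit)

# 96 small differences and 4 moderate ones, with made-up limits:
diffs = np.concatenate([np.full(96, 5.0), np.full(4, 15.0)])
print(meets_spec(diffs, limit_95=10, clinical_limit=30))   # passes
print(meets_spec(np.append(diffs, 40.0), 10, 30))          # one harmful error fails
```

The second call fails even though more than 95% of the differences are small – the point of the clinical limit is exactly that a single large error is disqualifying.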
Standards groups seem to have a different outlook. Without being overly cynical, the standards adopted often seem designed to guarantee that no company’s assay will fail the specification. Thus, 95% of differences between the assay and reference should be within these limits. There is almost never a mention of larger errors, which may cause patient harm.
Thus, it is somewhat ironic that company specifications are usually more difficult to achieve than specifications published by the standards organizations.
March 12, 2017
Looking at my blog stats, I see that a lot of people are reading the total analytical error vs. total error post. So, below are the slides from a talk called The “total” in total error that I gave at a conference in Antwerp in 2016. The slides have been updated. Because they were written for a talk, the slides are less effective without the accompanying narration.
November 15, 2016
Recently, I alerted readers to the fact that the updated FDA POCT glucose meter standard no longer specifies 100% of the results.
So I submitted a letter to the editor to the Journal of Diabetes Science and Technology.
This letter has been accepted – it seemed to take a long time for the editors to decide about my letter. I can think of several possible reasons:
1. I was just impatient – the time to reach a decision was average.
2. The editors were exceptionally busy due to their annual conference, which just took place.
3. By waiting until the conference, the editors could ask the FDA if they wanted to respond to my letter.
I’m hoping that #3 is the reason so I can understand why the FDA changed things.
November 8, 2016
The graph below shows the Parkes error grid in blue. Each zone in the Parkes error grid shows increasing patient harm with the innermost zone A having no harm. The zones (unlabeled) start with A (innermost) and go to D or E.
The red lines are the POCT12-A3 standard. The innermost red lines should contain 95% of the results. Since no more than 2% of results can be outside the outermost red lines, the outermost red lines should contain 98% of the data.
The red lines correspond roughly with the A zone of the Parkes error grid – the region of no patient harm.
Of course, the problem is that the CLSI guideline allows 2% of the results to fall in the higher zones of the Parkes error grid – zones corresponding to severe patient harm.