March 11, 2018
Readers of this blog know that I’m in favor of specifications that account for 100% of the results. The danger of specifications that cover only 95% or 99% of the results is that an assay can meet its specification and still produce errors that cause serious patient harm! Large and harmful errors are rare – certainly less than 1% of results. But hospitals might not want specifications that account for 100% of results (and remember that hospital clinical chemists populate standards committees). A potential reason: if a large error occurs, the 95% or 99% specification can be an advantage for a hospital if there is a lawsuit.
I’m thinking of an example where I was an expert witness. Of course, I can’t go into the details but this was a case where there was a large error, the patient was harmed, and the hospital lab was clearly at fault. (In this case it was a user error). The hospital lab’s defense was that they followed all procedures and met all standards, e.g., sorry but stuff happens.
As for irrelevant statistics, I’ve heard two well-known people in the area of diabetes (Dr. David B. Sacks and Dr. Andreas Pfützner) say in public meetings that one should not specify glucose meter performance for 100% of the results because one can never prove that the number of large errors is zero.
That one can never prove that the number of large errors is zero is true but this does not mean one should abandon a specification for 100% of the results.
Here, I’m reminded of blood gas. For blood gas, obtaining a result is critical. Hospital labs realize that blood gas instruments can break down and fail to produce a result. Since this is unacceptable, one can calculate the failure rate and reduce the risk of no result with redundancy (meaning using multiple instruments). No matter how many instruments are used, the possibility that all instruments will fail at the same time is not zero!
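The redundancy arithmetic is worth making explicit. A minimal sketch, assuming independent failures and an equal, made-up downtime probability per instrument – the residual risk of no result shrinks geometrically with each added instrument but never reaches zero:

```python
def all_down_probability(p_fail: float, n_instruments: int) -> float:
    """Probability that every instrument is unavailable at the same time,
    assuming independent failures with equal probability p_fail each."""
    return p_fail ** n_instruments

# Illustrative numbers (assumed, not from the post): each analyzer
# unavailable 2% of the time.
for n in (1, 2, 3):
    print(f"{n} instrument(s): P(no result) = {all_down_probability(0.02, n):.6f}")
```

With two instruments the no-result risk drops from 2% to 0.04%, and with three to 0.0008% – small, but never zero.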
A final problem with not specifying 100% of the results is that it may discourage labs from putting much thought into procedures that minimize the risk of large errors.
And in industry (at least at Ciba-Corning) we always had specifications for 100% of the results, as did the original version of the CLSI total error document, EP21-A (this was dropped in the A2 version).
January 25, 2018
Anyone who has even briefly ventured into the realm of statistics has seen the standard setup. One states a hypothesis, plans a protocol, collects and analyzes data, and finally concludes whether the data support the hypothesis.
Yet a typical lab medicine evaluation will state the importance of the assay, present data about precision, bias, and other parameters and then launch into a discussion.
What’s missing is the hypothesis, or in terms that we used in industry – the specifications. For example, assay A should have a CV of 5% or less in the range of XX to YY. After data analysis, the conclusion is that assay A met (or didn’t meet) the precision specification.
These specifications are rarely if ever present in evaluation publications. Try to find a specification the next time you read an evaluation paper. And without specifications, there are usually no meaningful conclusions.
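Stating the specification up front makes the conclusion mechanical. A minimal sketch of the idea, with a hypothetical 5% CV limit and made-up replicate data:

```python
import statistics

def meets_cv_spec(replicates, cv_limit_pct):
    """Return the observed CV (%) and whether it meets the specification."""
    cv_pct = 100 * statistics.stdev(replicates) / statistics.mean(replicates)
    return cv_pct, cv_pct <= cv_limit_pct

# Hypothetical replicate results for one control material
results = [98, 101, 103, 99, 100, 102]
cv, ok = meets_cv_spec(results, cv_limit_pct=5.0)
print(f"CV = {cv:.2f}% -> precision specification {'met' if ok else 'not met'}")
```

The point is not the code but the structure: specification first, data second, a yes/no conclusion last.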
January 17, 2018
Doing my infrequent journal scan, I came across the following paper – “The use of error and uncertainty methods in the medical laboratory” available here. One sentence floored me (it’s in the abstract)… “Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process.” It’s a little hard to understand what “diagnostic uncertainty” means. The sentence would be clearer if it read: performance specifications for diagnostic tests should include the entire testing process. But isn’t this obvious? Does this need to be stated as a principle in 2018?
August 19, 2017
A recent article suggests the CDC limit for blood lead may be lowered again. The logic for this is to base the limit on the 97.5th percentile of NHANES data, and to revisit the limit every 4 years. An article in Pediatrics has the details. Basically, the 97.5th percentile for blood lead has been decreasing – it was around 7 ug/dL in 2000. And in the Pediatrics article it is stated that: “No safe blood lead concentration in children has been identified.” Nor has human physiology changed!
It’s hard to understand the logic behind the limit. If a child had a blood lead of 6 in 2011, the child was ok according to the CDC standard, but not ok in 2013. Similarly, a blood lead of 4 in 2016 was ok but not in 2017?
Here is a summary of blood lead limits in the USA over time:
1991: 10 ug/dL
2012: 5 ug/dL
2017 (proposed): 3.48 ug/dL
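The mechanics behind such a limit are simple, which is part of the problem: the cutoff tracks the surveyed population, not toxicity. A sketch of a percentile-based limit with made-up survey values (not actual NHANES data):

```python
def percentile(values, pct):
    """Percentile by linear interpolation (one common convention)."""
    xs = sorted(values)
    k = (len(xs) - 1) * pct / 100
    lo = int(k)
    hi = min(lo + 1, len(xs) - 1)
    return xs[lo] + (xs[hi] - xs[lo]) * (k - lo)

# Hypothetical blood lead survey results (ug/dL); as population exposure
# falls, the 97.5th percentile – and hence the "limit" – falls with it.
survey_earlier = [1, 1, 2, 2, 2, 3, 3, 4, 5, 6]
survey_later = [0.5, 1, 1, 1, 2, 2, 2, 3, 3, 4]
print(percentile(survey_earlier, 97.5), percentile(survey_later, 97.5))
```

Same children, same physiology, different limit – which is exactly the logic this entry questions.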
May 26, 2017
I started this blog 13 years ago in March 2004 – the first two articles are about six sigma, here and here. This post is my 344th entry.
Although the blog has an eclectic range of topics, one unifying theme for many entries is specifications, how to set them and how to evaluate them.
A few years ago, I was working on a hematology analyzer, which has a multitude of reported parameters. The company was evaluating parameters with the usual means of precision studies and accuracy using regression. I asked them:
- a) what are the limits such that, when differences from reference are contained within them, no wrong medical decisions (resulting in patient harm) would be made based on the reported result, and
- b) what are the (wider) limits such that, when differences from reference are contained within them, no wrong medical decisions (resulting in severe patient harm) would be made based on the reported result?
This was a way of asking for an error grid for each parameter. I believe, then and now, that constructing an error grid is the best way to set specifications for any assay.
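A minimal sketch of the idea – the zone names and fixed limits below are illustrative only (real error grids, such as the glucose grids, use concentration-dependent and often asymmetric zone boundaries):

```python
def error_grid_zone(reference, reported, harm_limit, severe_limit):
    """Classify a reported result by its difference from reference.
    'A': no wrong medical decision; 'B': wrong decision possible
    (patient harm); 'C': wrong decision possible (severe patient harm).
    harm_limit and severe_limit are the two limits asked for in a) and b)."""
    diff = abs(reported - reference)
    if diff <= harm_limit:
        return 'A'
    if diff <= severe_limit:
        return 'B'
    return 'C'

# Hypothetical limits for some hematology parameter
for reported in (104, 112, 130):
    print(reported, error_grid_zone(100, reported, harm_limit=5, severe_limit=15))
```

Precision and regression statistics summarize an assay; the grid translates those differences into consequences for the patient, which is why I prefer it for setting specifications.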
As an example of the importance of specifications, there was a case for which I was an expert witness whereby the lab had produced an incorrect result that led to patient harm. The lab’s defense was that they had followed all procedures. Thus, as long as they followed procedures, they were not to blame. But procedures, which contain specifications, are not always adequate. As an example, remember the CMS program “equivalent quality control”?
April 13, 2007
I have written before about the difference between horizontal and vertical standards. ISO/TC212 produces standards for the clinical laboratory. The following came from a talk by Dr. Stinshoff, who has headed the ISO/TC212 effort. The red highlights are from Dr. Stinshoff.
“ISO/TC 212 Strategies:
– Select new projects using the breadth and depth of the expertise gathered in ISO/TC 212; focus on horizontal standards; address topics that are generally applicable to all IVD devices; and, limit the activities of ISO/TC 212 to a level that corresponds to the resources that are available (time and funds of the delegates).
– Assign high preference to standards for developed technologies; assign high preference to performance-oriented standards; take the potential cost of implementation of a standard into consideration; and, solicit New Work Item ideas only according to perceived needs, which should be fully explained and supported by evidence.
– Globalize regional standards that have a global impact”
What is meant by performance-oriented standards? “ISO Standardisation Performance vs. Prescriptive Standards:
Whenever possible, requirements shall be expressed in terms of performance rather than design or descriptive characteristics. This approach leaves maximum freedom to technical development….
(Excerpt of Clause 4.2, ISO/IEC Directives, Part 2, 2004)”
So one reason ISO/TC212 produces horizontal standards is because that is their strategy.
April 5, 2007
I am somewhat skeptical about the statement in a recent Westgard essay which suggests that Europeans who use ISO 15189 to help with accreditation are more likely to improve quality in their laboratories than US laboratories, which just try to meet minimum CLIA standards. ISO 15189 is much like ISO 9001, which is used for businesses. I have previously written that ISO 9001 certification plays no role in improving quality for diagnostic companies (1). As an example of ISO 15189 guidance – albeit in the version I have, which is from 2002 – under the section “Resolution of complaints”, ISO 15189 says the laboratory should have a policy and procedures for the resolution of complaints. In ISO 17025, which is a similar standard, virtually the identical passage occurs.
Westgard mentions that clinical laboratories need a way to estimate uncertainty that is more practical than the ISO GUM standard and mentions a CLSI subcommittee which is working on this. A more practical way is unlikely. I was on that subcommittee. I didn’t want to participate at first, since I don’t agree that clinical laboratories should estimate uncertainty according to GUM (2). However, the chair holder wanted me for my contrarian stance, so I joined. I must say that I enjoyed being on the subcommittee, which had a lot of smart people and an open dialog. However, I was unable to convince anyone of my point of view and therefore resigned, because it would make no sense to be both an author of this document and of reference 2. The last version of this document I saw was 80 pages long (half of it an Appendix) with many equations. This version will not be understood by most (any?) clinical laboratories. However, there is a CLSI document that allows one to estimate uncertainty intervals easily, EP21-A, although not according to GUM.
What is needed to improve clinical laboratory quality anywhere? Policies that emphasize measuring error rates such as FRACAS (3).
1. Krouwer JS: ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred Qual Assur 2004;9:39-43.
2. Krouwer JS: A critique of the GUM method of estimating and reporting uncertainty in diagnostic assays. Clin Chem 2003;49:1818-1821.
3. Krouwer JS: Using a learning curve approach to reduce laboratory error. Accred Qual Assur 2002;7:461-467.