Beware of Equivalent Quality Control

April 20, 2007

I attended the 2007 CLSI forum, which took place on April 20th in Baltimore. I got to speak about Evaluation Protocols. There were several highlights of the conference, one of which was to hear Neill Carey speak about Evaluation Protocols. His presentation is so clear that it is easy to see why his workshop at the AACC annual meeting is so popular. Another highlight was to hear Marina Kondratovich and Kristen Meier, both from the FDA, describe some of the statistical inadequacies that they encounter when reviewing FDA submissions.

I was also struck by the presentation of Judy Yost from CMS. She gave an update on equivalent (or alternative) quality control (EQC), a program that allows clinical laboratories, under certain circumstances, to reduce the frequency of external quality control to once a month. I presented an AACC session showing that this policy is not supported scientifically.

Ms. Yost ignored what I and others have said regarding EQC and went on to say that EQC has been a success story for the clinical laboratories using it, meaning that the inspection process for these laboratories has not uncovered any problems related to the reduced frequency of external quality control. This makes me think of an analogy: if someone removed the airbags from their car, stopped wearing seat belts, and didn't get into an accident, they might claim that one doesn't need airbags or seat belts because they have had no injuries without them.

Changing the frequency of external quality control changes the risk of adverse events. Ms. Yost’s assertion that EQC is working in clinical laboratories that are using it because of successful inspections does not inform about the change in risk. Jim Westgard got up and questioned Ms. Yost about the lack of scientific basis of EQC. Okay, I have some differences with some of Jim Westgard’s writings, but not only am I in agreement with him on this issue, I applaud him for getting up and asking these questions. It’s the right thing to do and demonstrates leadership.
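To make the point about risk concrete, here is a toy model, entirely my own with made-up numbers, of why QC frequency matters even when inspections find nothing: if an assay fails at a random time and the failure is caught only at the next external QC event, the expected number of erroneous results released grows roughly with the square of the QC interval.

```python
# Toy model (my own illustration, not any standard or CMS analysis):
# a failure occurs at a uniformly random time within a QC interval and
# is detected only at the next external QC event.

def expected_bad_results(qc_interval_days, results_per_day, p_failure_per_day):
    """Expected erroneous results released per QC interval (illustrative).

    P(failure in interval) ~ p_failure_per_day * qc_interval_days;
    a uniform failure time means the assay runs bad for half the
    interval, on average, before QC catches it.
    """
    p_failure_in_interval = p_failure_per_day * qc_interval_days
    mean_undetected_days = qc_interval_days / 2.0
    return p_failure_in_interval * mean_undetected_days * results_per_day

daily = expected_bad_results(1, results_per_day=100, p_failure_per_day=0.001)
monthly = expected_bad_results(30, results_per_day=100, p_failure_per_day=0.001)
print(daily, monthly)  # exposure grows ~quadratically with the interval
```

With these made-up inputs the monthly-QC exposure is 900 times the daily-QC exposure per interval; the inspection record says nothing about this change in risk.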

Two years ago, a new CLSI subcommittee was formed to provide a scientific basis for manufacturers' recommendations for external quality control. Although the work of this subcommittee is still in progress, its scope has been changed – it will no longer provide guidance for manufacturers' recommendations for external quality control. This means there will continue to be no scientific basis for EQC.

CLSI has many valuable Evaluation Protocol standards about analytical performance. They have an opportunity to develop and promote standards in risk management. These are sorely needed.


Not good advice on how to conduct FMEA

April 14, 2007

I had occasion to view a presentation about FMEA given at the 2006 CLSI forum. There are some serious issues with this advice on how to perform an FMEA, which can be summarized as follows.

Detection is listed as an item to be classified (in addition to severity and probability of occurrence). I have advised against this previously.

The RPN (risk priority number) is examined only after mitigations have been put in place. See this essay for why this can cause problems.

And perhaps worst of all, patient safety events and non-patient-safety events are in the same classification scheme. For example, 10 = injury or death, 9 = regulatory non-compliance. This means that in a Pareto chart one could end up worrying more about documentation issues than about killing someone – sorry, but that's a fact.

Severity = 10, probability of occurrence = 1, detection = 5: RPN = 50

Severity = 9, probability of occurrence = 8, detection = 5: RPN = 360
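A short sketch of the RPN arithmetic above makes the inversion explicit: with a mixed scale, the regulatory non-compliance item tops the ranking over injury or death.

```python
# RPN arithmetic from the example above: severity x occurrence x detection.
# The numbers come from the post; the labels follow its mixed scale
# (10 = injury or death, 9 = regulatory non-compliance).

def rpn(severity, occurrence, detection):
    return severity * occurrence * detection

injury = rpn(severity=10, occurrence=1, detection=5)     # 50
paperwork = rpn(severity=9, occurrence=8, detection=5)   # 360

# Pareto ranking by RPN, highest first
ranked = sorted([("injury or death", injury),
                 ("regulatory non-compliance", paperwork)],
                key=lambda item: item[1], reverse=True)
print(ranked)  # the compliance issue outranks the patient-safety event
```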

I covered this in detail in my book, Managing Risk in Hospitals.


You get what you ask for

April 13, 2007

I have written before about the difference between horizontal and vertical standards. ISO/TC212 produces standards for the clinical laboratory. The following came from a talk by Dr. Stinshoff, who has headed the ISO/TC212 effort. The red highlights are from Dr. Stinshoff.

“ISO/TC 212 Strategies:

– Select new projects using the breadth and depth of the expertise gathered in ISO/TC 212; focus on horizontal standards; address topics that are generally applicable to all IVD devices; and, limit the activities of ISO/TC 212 to a level that corresponds to the resources that are available (time and funds of the delegates).

– Assign high preference to standards for developed technologies; assign high preference to performance-oriented standards; take the potential cost of implementation of a standard into consideration; and, solicit New Work Item ideas only according to perceived needs, which should be fully explained and supported by evidence.

– Globalize regional standards that have a global impact”


What is meant by performance-oriented standards? "ISO Standardisation: Performance vs. Prescriptive Standards:

Whenever possible, requirements shall be expressed in terms of performance rather than design or descriptive characteristics. This approach leaves maximum freedom to technical development….

(Excerpt of Clause 4.2, ISO/IEC Directives, Part 2, 2004)”

So one reason ISO/TC212 produces horizontal standards is that doing so is their stated strategy.


European and US clinical laboratory quality

April 5, 2007

I am somewhat skeptical about the statement in a recent Westgard essay suggesting that European laboratories that use ISO 15189 to help with accreditation are more likely to improve quality than US laboratories, which just try to meet minimum CLIA standards. ISO 15189 is much like ISO 9001, which is used for businesses. I have previously written that ISO 9001 certification plays no role in improving quality for diagnostic companies (1). As an example of ISO 15189 guidance – albeit from the 2002 version, which is the one I have – under the section "Resolution of complaints", ISO 15189 says the laboratory should have a policy and procedures for the resolution of complaints. In ISO 17025, a similar standard, virtually the identical passage occurs.

Westgard mentions that clinical laboratories need a way to estimate uncertainty that is more practical than the ISO GUM standard, and mentions a CLSI subcommittee that is working on this. A more practical way is unlikely. I was on that subcommittee. I didn't want to participate at first, since I don't agree that clinical laboratories should estimate uncertainty according to GUM (2). However, the chairholder wanted me for my contrarian stance, so I joined. I must say that I enjoyed being on the subcommittee, which had a lot of smart people and an open dialog. However, I was unable to convince anyone of my point of view and therefore resigned, because it would make no sense to be both an author of the document and of reference 2. The last version of the document I saw was 80 pages long (half of it an appendix) with many equations. It will not be understood by most (any?) clinical laboratories. There is, however, a CLSI document that allows one to estimate uncertainty intervals easily, EP21A, although not according to GUM.
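To show why an empirical approach is simpler, here is a rough sketch, mine and not the EP21A procedure itself, of the underlying idea: take an interval containing most of the observed candidate-minus-comparison differences on patient samples, with no GUM-style error propagation and no equations to speak of. The data are hypothetical.

```python
# Illustrative sketch (my own, NOT the EP21A procedure): a nonparametric
# interval for total error taken directly from observed differences
# between a candidate method and a comparison method on patient samples.

def empirical_interval(differences, coverage=0.95):
    """Interval containing roughly `coverage` of the observed differences."""
    d = sorted(differences)
    n = len(d)
    lo = d[int(n * (1 - coverage) / 2)]
    hi = d[min(n - 1, int(n * (1 + coverage) / 2))]
    return lo, hi

# hypothetical paired differences (candidate - comparison), in assay units
diffs = [-2.1, -1.4, -0.9, -0.5, -0.2, 0.0, 0.1, 0.3, 0.4, 0.6,
         0.7, 0.9, 1.0, 1.2, 1.3, 1.5, 1.8, 2.0, 2.4, 3.1]
print(empirical_interval(diffs))
```

No component-by-component uncertainty budget is needed; the observed differences already contain every error source that acted on the samples.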

What is needed to improve clinical laboratory quality anywhere? Policies that emphasize measuring error rates such as FRACAS (3).
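In the spirit of FRACAS (and of reference 3), the core of such a policy is mundane: log every failure, normalize by volume, and watch whether the rate falls after corrective action. A minimal sketch with hypothetical log entries:

```python
# Minimal FRACAS-flavored sketch (hypothetical data and failure modes):
# count logged failures per reporting period and divide by test volume,
# so corrective actions are judged by whether the error rate drops.
from collections import Counter

failures = [  # (period, failure mode) - hypothetical log entries
    ("2007-01", "mislabeled specimen"), ("2007-01", "QC rule ignored"),
    ("2007-01", "wrong dilution"), ("2007-02", "mislabeled specimen"),
    ("2007-02", "QC rule ignored"), ("2007-03", "mislabeled specimen"),
]
tests_run = {"2007-01": 5000, "2007-02": 5200, "2007-03": 5100}

counts = Counter(period for period, _ in failures)
rate = {period: n / tests_run[period] for period, n in counts.items()}
for period in sorted(rate):
    print(period, f"{rate[period]:.2e}")  # a falling rate is the goal
```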


  1. Krouwer JS. ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred Qual Assur 2004;9:39-43.
  2. Krouwer JS. A critique of the GUM method of estimating and reporting uncertainty in diagnostic assays. Clin Chem 2003;49:1818-1821.
  3. Krouwer JS. Using a learning curve approach to reduce laboratory error. Accred Qual Assur 2002;7:461-467.