Does running more quality control lead to meeting quality goals? – not always

April 17, 2005

Westgard has commented that many assays don’t have sufficient quality, as judged by analysis of proficiency testing data, and that until better assays are provided, labs need to run more QC to improve quality and to meet quality requirements (1). His “OpSpecs” treatment is provided as the rationale (2). This essay addresses the assertion that if labs run more QC, quality will improve and quality requirements will be met.

Ideally, when a lab considers using an assay, the lab should evaluate the assay to determine if the assay meets the clinical quality requirements for that analyte. Of course, one can ask how many labs actually have clinical quality requirements but that’s another essay. In practice, many labs don’t conduct this type of evaluation and rely on the manufacturer’s claims – the lab evaluation (often driven by regulation) is a verification that the assay is working as the manufacturer intends.

The lab also establishes a quality control program for that analyte (also often driven by regulation). Although there are many ways to set this up, the purpose of quality control is to monitor the stability of the assay process. When something goes “bump in the night,” the stability of the assay process has been lost. The quality control program should detect this and alert the lab that the assay is not in control. The quality control process can be adjusted to detect smaller departures from stability (e.g., by running more quality control samples or by using different accept / reject rules).
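
As a deliberately simplified illustration of such an accept / reject rule, the sketch below flags a control result that falls more than k standard deviations from its target; the sodium target, SD, and 2 SD limit are hypothetical, and real QC programs typically combine several rules.

    # Minimal sketch of one accept / reject QC rule (illustrative values only).
    def qc_alert(result, target, sd, k=2.0):
        """Return True if the control result is more than k SDs from its target."""
        return abs(result - target) > k * sd

    # Hypothetical sodium control: target 140 mmol/L, SD 1.4 mmol/L (1% CV).
    print(qc_alert(143.5, 140.0, 1.4))   # True  -> alert (result is 2.5 SDs away)
    print(qc_alert(141.0, 140.0, 1.4))   # False -> accepted
    # Detecting smaller departures means a smaller k and/or more control samples.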

Determining whether the assay process meets clinical quality requirements and is stable (in control) gives four possible outcomes as shown in Table 1.

Table 1 Outcomes for process capability and quality control tests

Case | Assay meets clinical quality requirements? | Quality control OK? | Comment
1 | Data passes | Data passes | Process is stable with medically acceptable results
2 | Data passes | Data fails | Process is unstable but has medically acceptable results
3 | Data fails | Data passes | Process is stable with medically unacceptable results
4 | Data fails | Data fails | Process is unstable with medically unacceptable results

Cases 2 and 3, in which the data fail one of the two criteria, are covered in more detail below.

The unstable process that meets clinical quality requirements

In case 2, even though the assay is not in control (i.e., the quality control results show that the process is no longer stable), the assay results are still clinically acceptable. However, case 2 requires action because an unstable assay is unpredictable and the situation could easily turn into case 4. So the cause of the loss of process stability must be found and corrected.

The stable process that doesn’t meet clinical quality requirements

Case 3 is an example of a process that is not “capable”: the process is in control, yet the assay results are clinically unacceptable. An example of this is troponin I, where evaluation of assay imprecision failed to meet the consensus European Society of Cardiology/American College of Cardiology (ESC-ACC) recommendations (3).

Increasing the number of quality control samples will not improve the quality of this assay so that it meets its quality requirements. Neither will changing the quality control goals to allow for detection of smaller departures from a stable process, since even if tiny departures in imprecision stability are detected (and corrected), the inherent imprecision is inadequate. It would also be a mistake to make the quality control limits narrower. One would get frequent alerts but there would be nothing to fix (from a quality control standpoint). Making adjustments to a process that is not capable is known as “chasing after the noise.” There are at least two possible solutions for this troponin I case: redesign the assay (manufacturer solution), or run replicates to improve imprecision (lab solution).
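
To make the lab-side option concrete, the sketch below shows the usual statistical argument: reporting the mean of n independent replicates shrinks the CV by roughly the square root of n. The single-measurement CV and the 10% goal are illustrative assumptions, not values taken from reference 3.

    import math

    def cv_of_mean(cv_single, n):
        """Approximate CV (%) of the mean of n independent replicates."""
        return cv_single / math.sqrt(n)

    cv_single = 18.0   # assumed CV (%) of a single low-level troponin I result
    goal_cv = 10.0     # assumed imprecision goal (%)
    for n in (1, 2, 4):
        cv = cv_of_mean(cv_single, n)
        print(n, round(cv, 1), cv <= goal_cv)
    # 1 replicate: 18.0% (fails); 2: 12.7% (fails); 4: 9.0% (meets the goal).
    # Running more QC samples does not change cv_single; only replication
    # (or a redesigned assay) does.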

Similar cases can be made when bias is the issue, but things get more complicated. First, consider what happens when a manufacturer releases a new method. By means of various traceability studies, the originally released method often has no bias. This situation rarely persists, for a variety of reasons, so the lab is often running a method with some bias in it. The lab is also probably running an unassayed control with peer group assigned targets, and this process has some error of its own – after all, the target is the average of a bunch of biased methods. So after all of this, say the lab’s sodium method has a 1% bias but is very close to its QC target. If the inherent imprecision is 1.0%, then 95% of the errors will fall between -1% and +3% for a stable process. Now if the true mean is 140 mmol/L and the quality requirement is that 95% of errors must be within ± 4 mmol/L, then this assay is not capable and no QC program can help it meet clinical quality requirements. Note that if the inherent imprecision is 1%, the allowed bias must be less than 0.86% (at 140).
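
The arithmetic behind this sodium example can be laid out as a short sketch, using the 2 SD convention implied by the -1% to +3% interval above; all numbers are the illustrative ones from the paragraph.

    true_value = 140.0       # mmol/L
    allowable_error = 4.0    # mmol/L; 95% of errors must fall within +/- 4
    bias_pct = 1.0           # % bias of the lab method
    cv_pct = 1.0             # % inherent imprecision

    tea_pct = 100.0 * allowable_error / true_value   # allowable total error, ~2.86%
    total_error_pct = bias_pct + 2.0 * cv_pct        # ~3.0% > 2.86%, so not capable
    allowed_bias_pct = tea_pct - 2.0 * cv_pct        # ~0.86%, the maximum tolerable bias

    print(round(tea_pct, 2), round(total_error_pct, 2), round(allowed_bias_pct, 2))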

For either of these cases, one could argue that running more QC always improves quality, since it allows one to detect and correct small losses in process stability, and thus quality is improved even if the resulting quality falls short of clinical quality requirements. However, it would be hard to justify the increased expense for small quality improvements. Moreover, the claim that the increased QC would lead to meeting quality goals is untrue.

Would running more QC ever lead to improved quality?

There are of course cases where running more QC will lead to improved quality such that quality requirements will be met. If one reconsiders the above sodium case but with a bias of 0.5%, then detection of a relatively small process shift is required, which implies quality control rules that require more QC. Of course, one could also argue that assays that are close to being not capable need to be improved by manufacturers and that running more QC is a less cost-effective solution.
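
Continuing the same illustrative numbers, the margin left for an undetected shift is small, which is why rules sensitive enough to catch it end up requiring more QC.

    tea_pct = 100.0 * 4.0 / 140.0   # allowable total error, ~2.86%
    cv_pct = 1.0                    # % inherent imprecision
    bias_pct = 0.5                  # % bias in this variant of the example

    # Shift that would use up the remaining error budget: ~0.36%, or about
    # 0.5 mmol/L at 140 mmol/L.
    tolerable_shift_pct = tea_pct - 2.0 * cv_pct - bias_pct
    print(round(tolerable_shift_pct, 2))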

One could also envision cases where running more QC (e.g., more than two samples per day) would lead to improved quality. For example, if an assay suffered from frequent special cause failures (e.g., it went bump in the night several times a day), then running more QC would likely detect these failures. Hence, in setting up a quality control program, one must take into account not just the size of the departure from stability but the frequency as well (see equivalent QC essay).

But …

If a quality control sample triggers an alert, the process may or may not have lost stability. Just as with a medical test, quality control rules suffer from false positives and false negatives, so an alert can be a false positive. Whether or not the alert is real, detection by itself does not improve quality. What is required is one of the following two cases, which together constitute a detection and recovery scheme.

  1. The special cause that triggered the alert is found and corrected, the patient samples are rerun, and quality control is acceptable.
  2. The special cause that triggered the alert is not found, but the patient samples are rerun and quality control is acceptable, which implies that the special cause is no longer present.

Evaluating clinical quality requirements

A final caution – one cannot guarantee that clinical quality requirements will be met by running only control samples (or proficiency samples). One must run patient samples in order to evaluate random patient interference effects (also see the equivalent QC essay), although this is usually a one-time evaluation. Moreover, many quality control programs deal with 95% assurance, but this leaves the possibility of 50,000 defects per million, which is close to a 3.1 sigma process (see the six sigma II essay).
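
For what it is worth, the 3.1 sigma figure follows from the usual 1.5 sigma shift convention in six sigma practice; the quick check below is only a rough sketch under that assumption, not taken from the six sigma II essay.

    from statistics import NormalDist

    assurance = 0.95
    defects_per_million = (1 - assurance) * 1_000_000     # 50,000 defects per million
    sigma_level = NormalDist().inv_cdf(assurance) + 1.5   # ~1.64 + 1.5 = ~3.1 sigma

    print(int(defects_per_million), round(sigma_level, 2))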

Conclusion

Running more QC may help a lab achieve clinically required quality for an assay when the inherent quality of the assay starts to approach the required clinical quality or when there are special cause problems that occur frequently. For assays that are not capable, or for quality problems not detectable by QC, such as random patient interferences, running more QC will not help a lab achieve clinically required quality for an assay – other solutions are needed.

References

  1. Presentation at CLSI meeting on equivalent QC, March 18, 2005.
  2. Westgard JO, Petersen PH, Wiebe DA. Laboratory process specifications for assuring quality in the U.S. National Cholesterol Education Program. Clin Chem 1991;37:656-661.
  3. Panteghini M, Pagani F, Yeo KJ, Apple FS, Christenson RH, et al. Evaluation of imprecision for cardiac troponin assays at low-range concentrations. Clin Chem 2004;50:327-332.