Equivalent QC

If you are unfamiliar with equivalent QC, see references 1-2. This essay explores some issues with equivalent QC.

 Go to the AACC expert online access presentation

What causes errors in assays and the role of QC

The figure above shows four generic types of error and how QC can prevent two of them. The four error types are explained below:

random patient interference – an interfering substance or mix of substances that causes a bias in the (patient) result; the interference is often different (e.g., apparently random) in each patient specimen. The incorrect results are repeatable on re-assay. For some or many patient specimens, there may be no observed interference. QC does not detect this error.

random bias – any short-term bias, such as a clog in an analyzer that lasts for a few samples and is not specific to a particular patient specimen. The incorrect results are often not repeatable on re-assay (because the bias has disappeared). QC probably won’t detect this error, since the probability of the error occurring during a QC sample is low. Note: the “clog in an analyzer” is a case of an error that may be detected by an internal monitoring system; in this example, the internal monitoring system has not detected it.

long-term bias – any bias, such as most calibration error, that lasts for at least a day and is thus detected by routine quality control. The “clog in an analyzer” failure could also last for more than a day. This definition is somewhat arbitrary, since some calibration error is short term (e.g., blood gas systems are calibrated more frequently than once a day).

imprecision – all biases that are very short term (they occur over less time than one assay result and are modeled as random error), plus longer-term uncompensated biases (for example, drift). Note that the imprecision typically measured in clinical chemistry assays is apparent random error, meaning the true random error plus uncompensated biases such as drift. QC can detect poor imprecision.
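To illustrate why QC rarely catches random (short-term) bias, here is a rough Monte Carlo sketch. The numbers are my own illustration, not from the references: a one-hour error (e.g., a clog) starts at a random time, QC samples are evenly spaced, and the error is "caught" only if a QC sample lands inside the error window.

```python
import random

def detection_rate(qc_per_day, error_hours=1.0, n_trials=100_000, seed=1):
    """Estimate the chance that a QC sample lands inside a short-term
    error window of `error_hours`, given evenly spaced QC samples
    (`qc_per_day` per 24 h) and an error starting at a random time."""
    random.seed(seed)
    interval = 24.0 / qc_per_day          # hours between QC samples
    hits = 0
    for _ in range(n_trials):
        start = random.uniform(0, interval)   # error start, relative to last QC
        # the next QC sample runs (interval - start) hours after the error begins
        if interval - start <= error_hours:
            hits += 1
    return hits / n_trials

for qc_per_day in (8, 2, 1):              # increased, current, reduced QC
    print(qc_per_day, round(detection_rate(qc_per_day), 3))
```

With these assumed numbers, even 8 QC samples per day catch only about a third of one-hour errors, and 2 per day catch under a tenth — consistent with the claim that QC probably won’t detect this error type.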

The effect of various QC schemes on detecting these errors:

                              QC Scheme
Error source                  Increased             Current (2 per day)    Reduced
Random patient interference   No effect             No effect              No effect
Short-term bias               Catches more errors   Catches fewer errors   Catches even fewer errors
Long-term bias                No effect             No effect              Catches fewer errors1
Imprecision                   No effect             No effect              No effect

1 For example, if a system is calibrated weekly and a calibration error occurs, running QC monthly will frequently miss the error.
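A back-of-envelope sketch of this footnote, with invented numbers: suppose a calibration error appears at one weekly calibration and persists until the next, i.e., for a 7-day window. A single monthly QC day, landing at random in a 28-day month, catches the error only if it falls inside that window.

```python
# Probability that one monthly QC day lands inside a 7-day
# calibration-error window (assumed numbers, for illustration only)
days_in_month = 28
error_window_days = 7

p_monthly_qc = error_window_days / days_in_month
print(p_monthly_qc)   # monthly QC misses this error 3 times out of 4
```

Daily QC, by contrast, is guaranteed to sample inside the window, which is why the table shows reduced QC catching fewer long-term bias errors.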

Internal monitoring systems

The rationale behind the reduction in QC frequency is the assertion that internal monitoring systems adequately detect and prevent incorrect results from being reported. Here are some problems with that assertion.

Calibration is hard to control through internal monitoring – It is unlikely that any internal monitoring system can detect all calibration problems. The whole basis of calibration is to associate an assay’s response (signal) with a known concentration. This sets up a calibration equation. Then, for each unknown (patient sample), the observed response is assigned a concentration according to that equation.

Although limits can be set on the expected calibrator response, and the shape of the response curve can be checked, there is no real way to prevent all other errors, and these can lead to calibration bias, which QC can detect.
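The calibration mechanism described above can be sketched as a simple two-point linear calibration (a simplified illustration with invented numbers; real assays often use nonlinear curves). The point is that any bias in the calibrator responses shifts the calibration equation, and therefore every patient result read through it:

```python
def fit_two_point(conc, resp):
    """Fit response = slope * concentration + intercept from two calibrators."""
    slope = (resp[1] - resp[0]) / (conc[1] - conc[0])
    intercept = resp[0] - slope * conc[0]
    return slope, intercept

def to_concentration(response, slope, intercept):
    """Invert the calibration equation for an unknown (patient) sample."""
    return (response - intercept) / slope

# Calibrators at 0 and 100 mg/dL giving responses 0.10 and 1.10 signal units
slope, intercept = fit_two_point((0.0, 100.0), (0.10, 1.10))
print(round(to_concentration(0.60, slope, intercept), 6))      # ~50 mg/dL

# Same patient response through a biased calibration (e.g., degraded
# calibrators whose responses both shifted down by 0.05)
bad_slope, bad_intercept = fit_two_point((0.0, 100.0), (0.05, 1.05))
print(round(to_concentration(0.60, bad_slope, bad_intercept), 6))  # ~55 mg/dL
```

The shifted result applies to every specimen run under the bad calibration, which is exactly the kind of systematic error a QC sample of known concentration would expose.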

Internal monitoring systems are models and can be wrong – An internal monitoring system is the result of a model of how the system can fail. These models are often based on fault trees and FMECAs. Mitigations are applied to detect and prevent errors through hardware and software. But there is no guarantee that either the model is correct (e.g., that all possible failure modes are included) or that the mitigations applied are 100% effective. In fact, experience has shown that assay development usually starts with a relatively large number of errors. Mitigations are repeatedly applied until a decision is made to release the product. Mitigations also are applied after product release. Of course, errors which affect patient results are classified as the most severe and are given the most attention. The process of repeatedly applying fixes (formally known as reliability growth management) is the most efficient way of developing complex instrument systems and is used because the required knowledge to “design things right the first time” doesn’t exist.

Another view of QC vs. internal monitoring systems

There is another fundamental difference between QC and internal monitoring systems. As stated above, internal monitoring systems are based on a model, whereas QC is largely observational. Observation means that, given reasonable quality control rules, one needs no knowledge of how the system can fail; one must only run QC. Putting things another way: you can forecast the weather through models (and these can be quite sophisticated), or you can go outside. In terms of the equivalent QC issue, one could suggest having the best internal monitoring systems possible and running QC to detect anything that was missed.

The problem with the validation protocol

The suggested validation protocol is 2 QC samples per day for 30 days; one failed QC that does not repeat is allowed. One can show that this proves, with 95% confidence, that the proportion of all QC failures is no more than 7.7% (see reference 3). In Six Sigma terms, this is “equivalent” to a 2.9 sigma process. And this is the best case, because one is not really interested in the QC samples but in the patient samples.
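The 7.7% figure can be reproduced as a one-sided Clopper-Pearson upper confidence bound on a binomial proportion: 60 trials (30 days × 2 QC samples), 1 failure allowed. A minimal sketch, using only the standard library:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def upper_bound(failures, n, confidence=0.95, tol=1e-9):
    """One-sided Clopper-Pearson upper confidence bound on the failure
    proportion: the p at which observing <= `failures` successes out of
    `n` becomes as improbable as (1 - confidence)."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:            # bisect: binom_cdf decreases in p
        mid = (lo + hi) / 2
        if binom_cdf(failures, n, mid) > 1 - confidence:
            lo = mid
        else:
            hi = mid
    return lo

# 30 days x 2 QC samples = 60 trials, 1 failed QC allowed
print(round(upper_bound(1, 60), 4))   # close to the 7.7% quoted in the text
```

In other words, passing the protocol is consistent with a true QC failure rate of up to roughly 1 in 13 — which is why the text calls this only a 2.9 sigma process.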

Cost must always be considered

None of the above deals with cost. If cost did not enter into the equation, one would increase QC frequency, not decrease it. However, cost is important. Running QC samples adds cost; the more lab tests cost, the fewer people can be tested, and this lack of information will increase morbidity and mortality. Yet if QC frequency is reduced, this may lead to more errors and also increase morbidity and mortality (also see reference 4).


The proposal to reduce QC implies that QC is redundant with internal monitoring systems. I have suggested why this might not be the case. The cost-benefit tradeoff of equivalent QC must be addressed with data, and that does not mean asking each lab to answer the question on its own.


  1. http://www.cms.hhs.gov/CLIA/downloads/6066bk.pdf
  2. http://www.westgard.com/cliafinalrule7.htm
  3. Hahn GJ and Meeker WQ. Statistical intervals. A guide for practitioners. Wiley: New York, 1991, p. 104
  4. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective. AACC Press: Washington, DC, 2002, p. 6.
