Frequency of QC in the clinical laboratory

December 9, 2007

Kent Dooley has written an interesting essay, which is here. One of the points he makes is that not all clinical laboratory errors result in patient harm, because clinicians will not always act on an erroneous result. If an assay result doesn’t agree with other clinical data, the clinician may suspect the result is wrong and ask to have it repeated. Dooley suggests that the minimum QC frequency should follow the time course of the likelihood of a clinician requesting a repeat sample, so that if the original result was in error, the repeat result will be correct (because QC has been run in the interim).

Now, I am unencumbered by the knowledge and experience of working in a lab, but my view of things is somewhat different. It seems to me that there are several error/detection/recovery possibilities, as shown in the figure below. (Note: better pictures are here.)

The problem with waiting for a clinician (or, for that matter, a patient) to question a result before running QC is that it doesn’t take advantage of the purpose of QC, which is shown below.

That is, one runs the assay and, at some point, QC. If the QC is OK, the results are released to the clinician. If not, one troubleshoots the assay, possibly including rerunning patient samples. Under this scheme, QC frequency should be determined not by a retest time course but by the turnaround-time requirement for the assay.
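To make the scheme concrete, here is a minimal sketch of the release/troubleshoot logic in Python. The single ±2 SD acceptance rule, the function names, and the numbers are all illustrative assumptions on my part, not any laboratory’s actual QC rules (real labs typically use multirule schemes such as the Westgard rules).

```python
# Minimal sketch of the QC-gated release scheme described above. The
# single +/-2 SD rule, names, and numbers are illustrative assumptions,
# not any laboratory's actual QC rules.

def qc_in_control(measured, target, sd, k=2.0):
    """Single-rule check: control result within +/- k SD of its target."""
    return abs(measured - target) <= k * sd

def process_run(patient_results, qc_measured, qc_target, qc_sd):
    """Release patient results only if the QC result is in control;
    otherwise hold them and troubleshoot (possibly rerunning patients)."""
    if qc_in_control(qc_measured, qc_target, qc_sd):
        return "release", patient_results
    return "troubleshoot", []

# Example: a glucose run (mg/dL) with one control level.
status, released = process_run(
    patient_results=[92, 135, 188],
    qc_measured=101.0, qc_target=100.0, qc_sd=2.5,
)
print(status, released)  # -> release [92, 135, 188]
```

The point of the sketch is only the gating: patient results go out when the QC result is in control, and the run is investigated when it is not.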


Now, if the clinician requests that the assay be repeated and QC has already been run, it is unlikely that running a second QC will detect anything. QC has limitations in its ability to detect error (see figure below): random biases and random patient interferences will not be detected by QC.

This figure came from previous considerations about equivalent QC, which are here and here.

Suspected assay error is not the only reason results are repeated; many assays are repeated because a condition is being monitored. Delta checks are a type of QC performed on these samples to determine whether the difference between successive results is expected (a minimal sketch appears after this paragraph). Exactly how the clinical laboratory could act on the knowledge that the clinician suspects something is wrong with an assay result is a topic for clinical laboratorians to answer.
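For illustration, here is a minimal delta-check sketch in Python. The analyte, the absolute and percent limits, and the require-both-limits rule are assumptions invented for the example; real laboratories set such limits per analyte.

```python
# Minimal delta-check sketch: flag a result that differs from the
# patient's previous result by more than an allowed delta. The limits
# below are invented for illustration; real limits are set per analyte.

def delta_check(previous, current, abs_limit, pct_limit):
    """Flag when the change exceeds BOTH the absolute and percent limits."""
    delta = abs(current - previous)
    pct = 100.0 * delta / abs(previous) if previous else float("inf")
    return delta > abs_limit and pct > pct_limit

# Example: serum potassium (mmol/L) with hypothetical limits.
flagged = delta_check(previous=4.1, current=6.3, abs_limit=1.0, pct_limit=20.0)
print(flagged)  # -> True: the jump warrants review before release
```

Some laboratories flag when either limit is exceeded; requiring both, as in this sketch, reduces flags on low-concentration results, where small absolute changes produce large percentages.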


Central lines and FRACAS

December 7, 2007

One hears of FRACAS (Failure Reporting, Analysis, and Corrective Action System) success stories, like the one below, and FMEA (Failure Mode and Effects Analysis) failure stories, like the wrong-blood-type organs transplanted at Duke. One reason one doesn’t hear of FMEA success stories is that saying something that has never happened is now even less likely to happen (due to FMEA) just isn’t very exciting. FMEA success stories are often not cases of FMEA at all; they are FRACAS, since rate improvements are discussed. FRACAS failures (“we tried something, it didn’t work”) are not very interesting either.

A recent article in The New Yorker (1) provides an example of a FRACAS success story.

The article makes no mention of FRACAS, but many of its steps were followed. The issue was a too-high infection rate in central lines. It is important that one can measure this rate: one knows how many central lines are used, infections manifest themselves, and their cause can be determined by culturing the lines. Some undercounting is possible, but the rate seems fairly reliable.
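As an aside, such line-infection rates are often expressed per 1,000 catheter-days. Here is a minimal sketch of that calculation in Python; the function name and the before/after numbers are made up for illustration and are not figures from the article.

```python
# Minimal sketch of the rate measurement that makes a FRACAS possible:
# central-line infections per 1,000 catheter-days. All numbers are made up.

def infections_per_1000_line_days(infections, line_days):
    """Normalize the infection count by exposure (catheter-days)."""
    return 1000.0 * infections / line_days

# Hypothetical before/after comparison for a control measure.
before = infections_per_1000_line_days(infections=8, line_days=2900)
after = infections_per_1000_line_days(infections=1, line_days=3100)
print(f"before: {before:.1f}, after: {after:.1f} per 1,000 line-days")
```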

The man behind the work, Dr. Peter Pronovost, first observed events for a month within the context of the process of placing central lines (i.e., process mapping). Errors in the process steps were identified. Since these steps were simple, such as washing hands, one could partly view these errors as non-cognitive errors. This suggests a control measure such as a double check to prevent such “slips”. Actually, besides slips, there may have been some at-risk behavior (2): behavior that increases risk where the risk is not recognized, or is mistakenly believed to be justified. The main control measure used was a checklist, with the addition of having nurses double-check that the checklist steps were properly done. The rate was then measured again and found to be considerably lower. All of this was published (3).

It was mentioned that an alternative control measure had been tried; namely, central lines coated with antimicrobials. This expensive control measure failed to provide a substantial reduction in infection rates, which illustrates that one must be open-minded when selecting control measures. There is sometimes a bias towards fixing the “system” (such as with coated lines) rather than fixing a people issue (which often implies blame). Dr. Pronovost also implemented some system control measures by getting the manufacturer of central lines to include drapes and chlorhexidine, items that should have been available at the bedside but often were not.

Another big part of this story is the ongoing resistance to implementing this control measure more widely, even after it had been shown to be effective and low-cost. Any control measure can be viewed as a standard, and standards are not very popular. People will argue “but our situation is different”, “ICUs are too complicated for standards”, and so on. Financial incentives (or disincentives) for standards (e.g., pay-for-performance, or P4P) loom. Dr. Gawande goes on to say how complicated things are in an ICU, yet that is precisely where standards helped. A similar situation occurred in anesthesiology in the late 1970s and early 1980s. (There, critical incident analysis was used, which is basically the same as FRACAS.) The error rate was too high, effective control measures were developed, and widespread implementation of the control measures took considerable effort. You can read about that story here.

References

1. Gawande A. Annals of Medicine: The checklist. The New Yorker, Dec. 7th issue, 2007; see here (I don’t know how long this link will work).

2. Marx D. Patient Safety and the “Just Culture”: A Primer for Health Care Executives. http://www.mers-tm.net/support/Marx_Primer.pdf

3. Pronovost P, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725-32.