Reading Quality Digest can be dangerous to your health

June 17, 2008

In the June 2008 issue of Quality Digest, there is an article by Jay Arthur entitled “Statistical Process Control for Healthcare” (1). After the usual boilerplate introduction, something caught my eye; namely, the so-called good news that there is “inexpensive Excel-based software to create control charts … .” This made me go to the end of the article, where sure enough the author just happens to sell such software. This may have been a good place for the author to introduce the term bias.

To understand a more serious problem with this article, consider a hospital process; namely, analyzing blood glucose in a hospital laboratory. Because such a process has error, quality control samples are run. Say such a control has a target value of 100 mg/dL. The values of the quality control samples are plotted by SPC software and rules are formulated. If the glucose control value is too high or too low, the process is said to be out of control and action is taken; a minimal sketch of such a rule check appears below.
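As a minimal sketch of the kind of rule such software applies, here is the logic in Python. The 100 mg/dL target comes from the example above; the 2 mg/dL standard deviation, the ±3 SD rule, and the QC values are my assumptions for illustration, not anything from Mr. Arthur’s article.

```python
# Minimal sketch of a Shewhart-style QC rule check (illustrative values only).
# Assumes a glucose control target of 100 mg/dL, an SD of 2 mg/dL, and the
# common +/- 3 SD out-of-control rule; real SPC software applies more rules.

TARGET = 100.0   # mg/dL, assigned control target
SD = 2.0         # mg/dL, assumed analytical standard deviation

def out_of_control(value, target=TARGET, sd=SD, k=3.0):
    """Flag a QC result that falls outside target +/- k*sd."""
    return abs(value - target) > k * sd

qc_values = [99.1, 101.4, 98.7, 107.2, 100.3]  # hypothetical daily QC results
for day, value in enumerate(qc_values, start=1):
    status = "OUT OF CONTROL" if out_of_control(value) else "in control"
    print(f"Day {day}: {value} mg/dL -> {status}")
```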

Now, Mr. Arthur is trying to push SPC software not for a process but for errors in the process. For example, he uses the infection rate in a hospital. But the infection rate is an error rate, not a process that one wants to control – of course one does not want it to become worse – its target is zero.

A more useful example than the hypothetical one provided by Mr. Arthur was published recently (2). Here, the authors were faced with an undesirably high hospital infection rate and set out to observe where errors occurred in the process of placing central lines. They then put control measures in place and continued to track the error rate, which was reduced to zero. This is not SPC! It is much more like a FRACAS (Failure Reporting And Corrective Action System).

In another part of the article, Mr. Arthur suggests that “never events” can be tracked by SPC. Never events – a list of 28 such events has been put forth by the National Quality Forum – have, as the name implies, targets of zero. One such event is wrong-site surgery. One should use something like FMEA (Failure Mode and Effects Analysis) to reduce the risk of such events. It is silly to suggest SPC software for never events.

References

1. See http://www.qualitydigest.com/currentmag/articles/03_article.shtml

2. Pronovost P, Needham D, Berenholtz S, Sinopoli D, Chu H, Cosgrove S, Sexton B, Hyzy R, Welsh R, Roth G, Bander J, Kepros J, Goeschel C. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med 2006;355:2725.


Westgard Quality Control Workshop – Part 3

June 5, 2008

I just returned from the Westgard Quality Control Workshop, where I was a speaker, and have a few blogs worth of comments – this is the third.

EQC – Equivalent Quality Control

This is the CMS proposal (1) to allow clinical laboratories to reduce the frequency of quality control from twice per day to once a month, given that 10 days of running QC shows no out-of-control values (and given some other conditions).

Let’s try to construct a hypothesis on which to base such a recommendation. For example:

given any possible error condition that could be detected by external quality control, internal quality control would detect the same error 100% of the time.

This is about the best I can think of, which would result in the recommendation:

Stop running external quality control.

What does running 10 days of external QC with no out-of-control results show? The answer is nothing. One can only assume that during these 10 days, either there were no errors or, if there were errors, external QC was not able to detect them. (It is possible that internal QC detected errors during these 10 days.) In fact, this experiment is guaranteed to be meaningless. To see this, one must realize that internal QC is always “on” and precedes external QC. So for external QC to be shown redundant to internal QC for an error, internal QC would have to detect the error and either shut down the system or prevent the result – this being the external QC sample – from being reported.

However, one can get different information by running external QC for a longer period, because if internal QC misses an error but external QC detects it, then one has proved that external QC is not redundant to internal QC. This was shown to me (2): out-of-control results that reflected real problems, occurring at rates of 1 to 10 per year depending on the assay. Since controls are run only twice per day, the number of affected patient samples is larger still.
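To put a rough number on why the 10-day trial is meaningless, suppose detectable errors occur at the 1 to 10 per year rates just mentioned. Assuming, purely for illustration, that errors arrive as a Poisson process (my assumption, not anything in the CMS proposal), the chance that a 10-day window contains even one detectable error is small:

```python
# Hypothetical back-of-envelope: if detectable errors occur 1 to 10 times
# per year (the rates cited above), how likely is a 10-day trial to see
# even one? Assumes errors arrive as a Poisson process (an assumption
# made only for illustration).
import math

def p_at_least_one_error(errors_per_year, days):
    rate = errors_per_year * days / 365.0  # expected errors in the window
    return 1.0 - math.exp(-rate)           # Poisson P(N >= 1)

for errors_per_year in (1, 5, 10):
    p = p_at_least_one_error(errors_per_year, days=10)
    print(f"{errors_per_year:>2} errors/year -> "
          f"P(any error in 10 days) = {p:.1%}")
```

With probabilities on the order of 3% to 24%, most 10-day trials will be clean whether or not external QC adds value, so a clean trial proves nothing.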

So a lab that reduces external QC to once a month puts an even larger number of patient samples at risk, made worse by the fact that clinicians will probably have acted on the erroneous results before the error is detected.

Rather than perform the experiment suggested by CMS, a lab can simply examine its external QC records for a sufficiently long period.

References

1. To review, see http://www.aacc.org/events/expert_access/2005/eqc/Pages/default.aspx

2. Personal communication from Greg Miller of Virginia Commonwealth University.


Westgard Quality Control Workshop – Part 2

June 5, 2008

I just returned from the Westgard Quality Control Workshop, where I was a speaker, and have a few blogs worth of comments – this is the second.

How does one determine acceptable risk?

This was one of the questions asked by a participant – are there any guidelines? I also commented recently that, in spite of all the talk about risk management and putting in place control measures until one has acceptable risk, no one knows what acceptable risk means. Here are some more thoughts on this.

There are different types of risk, which can be enumerated (1). These include:

perception – complaints from either hospital or non-hospital staff

performance – traditional quality, including errors that can affect patient safety

financial – errors that threaten the financial health of the service including lawsuits

regulatory – errors that threaten the accreditation status of the service

So first, one must say which risk one has in mind. One can envision an acceptable regulatory risk (we always pass inspections) but an unacceptable patient safety risk. Note also that the risks are not mutually exclusive: one can have a patient safety failure with or without a lawsuit.

Assume the risk in question is the performance risk, and specifically patient safety. The Cadillac version of assessing risk would be to perform a quantitative fault tree analysis and arrive at a numerical probability of patient harm. This is unlikely in practice, and one would probably end up with a qualitative assessment. But whether the assessment is quantitative or qualitative, it still hasn’t answered the acceptability question.

The problem is that there is no easy answer to this question. If one had unlimited funds, one could lower the risk to whatever level was desired, but funds are limited by the healthcare economic policy of the laboratory’s country (2). So one answer is that acceptable risk is however this economic policy is translated into regulations (e.g., one follows existing regulations and passes inspections). Yet this is only a quasi-legal way of stating acceptable risk.

Recommendation

I suggest that risk be assessed by traditional means (FMEA, fault tree), which includes a Pareto chart or table to rank the risks. Then, if one allocates the money one has for implementing control measures (mitigations) by a portfolio-type optimization – a sketch appears below – one has an acceptable risk under the imposed financial constraints.
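As a hypothetical sketch of what such a portfolio-type allocation could look like: rank candidate mitigations by risk reduction per dollar and fund them until the budget runs out. Every mitigation name, cost, and risk score below is invented for illustration; in practice they would come from the FMEA or fault tree and the Pareto ranking.

```python
# Hypothetical sketch of a portfolio-type allocation: pick control
# measures (mitigations) to maximize total risk reduction within a
# fixed budget. All names, costs, and scores are invented; a real
# analysis would take them from an FMEA/fault tree Pareto ranking.

mitigations = [
    # (name, cost in $, expected risk reduction in Pareto score units)
    ("barcode specimen ID",       50_000, 40),
    ("second QC level per run",   10_000, 15),
    ("interface result checks",   20_000, 25),
    ("extra phlebotomy training",  5_000,  8),
]
budget = 60_000

# Greedy heuristic: fund the best risk reduction per dollar first.
chosen, spent, reduced = [], 0, 0
for name, cost, reduction in sorted(
        mitigations, key=lambda m: m[2] / m[1], reverse=True):
    if spent + cost <= budget:
        chosen.append(name)
        spent += cost
        reduced += reduction

print(f"Funded: {chosen}")
print(f"Spent ${spent:,} of ${budget:,}; risk score reduced by {reduced}")
```

A greedy ranking like this is only a heuristic (an exact knapsack solver could do slightly better), but it makes the point: under this view, acceptable risk is not chosen in advance; it is whatever residual risk remains after the budget is spent as well as possible.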


References

1. Krouwer JS. Managing risk in hospitals using integrated Fault Trees/FMECAs. AACC Press, Washington DC, 2004.

2. See http://covertrationingblog.com/


Westgard Quality Control Workshop – Part 1

June 5, 2008

I just returned from the Westgard Quality Control Workshop, where I was a speaker, and have a few blogs worth of comments – this is the first.

What’s Missing from Clinical Laboratory Inspections

At the Westgard Workshop, most of the participants were from clinical laboratories and I was impressed with how smart these people are. I also got a sense of a tremendous regulatory burden. From the CAP CD that I obtained at the Workshop:

      The mission statement of the CAP Laboratory Accreditation Program is:

“The CAP Laboratory Accreditation Program improves patient safety by advancing the quality of pathology and laboratory services through education and standard setting, and ensuring laboratories meet or exceed regulatory requirements.”

I have had mixed feelings about inspections that certify quality and have previously reported my experience with an industry quality program – ISO 9001 (1).

Here’s my assessment of clinical laboratory inspections to certify laboratories. The premise of these inspections seems to be that ensuring specific policies and procedures are in place and executed – as proven largely by documentation – guarantees high quality. So what’s missing? As far as I can tell – and it is difficult to read through these materials – there is no measurement of error rates. Without such measurements, quality is unknown.

Recommendation

The regulatory bodies would describe a list of errors and their associated severities. The severities would be given numerical values, such as the 1-4 scale used by the VA hospital system. Every clinical laboratory would record each error (failure mode) that occurs in its laboratory, its severity, and its frequency (the default frequency is of course 1). They would multiply frequency × severity for each unique error (failure mode), sum these products, and divide by the number of tests reported per year to get a rate, as in the sketch below.
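A minimal sketch of this metric in Python. The failure modes, counts, and annual test volume are invented for illustration; the 1-4 severity scale follows the VA usage mentioned above.

```python
# Sketch of the proposed quality metric: sum of frequency x severity over
# all recorded failure modes, divided by tests reported per year.
# Failure modes, counts, and test volume below are invented examples.

errors = [
    # (failure mode, frequency per year, severity on a 1-4 scale)
    ("mislabeled specimen",          12, 4),
    ("delayed critical-value call",  30, 3),
    ("QC rule violation unreviewed",  8, 2),
    ("typo in patient demographics", 90, 1),
]
tests_per_year = 1_000_000  # assumed annual reported test volume

weighted_sum = sum(freq * sev for _, freq, sev in errors)
rate = weighted_sum / tests_per_year

print(f"Severity-weighted error rate: {rate:.2e} per reported test")
# e.g. 12*4 + 30*3 + 8*2 + 90*1 = 244 -> 2.44e-04 per test
```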

Failing to count errors would be a serious violation.

This would be the start of a new premise for the regulatory bodies: measure quality, and if it’s unacceptable, the clinical laboratory would suggest and implement process changes. It’s a simple closed-loop process. With the emphasis on measurement, reliance on documentation should decrease and inspections should become less burdensome.


References

1. Krouwer JS. ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred Qual Assur 2004;9:39-43.