Wrong Site Surgery Rates in Minnesota

January 27, 2011

From a mailing list I subscribe to, I became aware of patient safety events in Minnesota. In particular, there were 44 wrong site surgery events in 2008 out of 2.6 million surgeries, or 1 out of every 60,000 surgeries, or a rate of 0.0016923%. Wrong site surgery is the plane crash of hospital errors, although, as pointed out in this report and elsewhere, it often causes less patient harm than one might think. During the same reporting period there were 38 cases of retained foreign objects, an event that is usually more harmful than wrong site surgery.

If the wrong site surgery rate sounds low, it also translates to about 17 events per million, and many clinical laboratories report a million results per year. The plane crash of errors in clinical laboratories is probably a patient sample mix-up (or perhaps a severe interference). One reported patient sample mix-up rate was 0.0038928%, which is more than twice the wrong site surgery rate. And a patient sample mix-up is something that can't be controlled by evaluating an assay for precision and bias. Nor does it have anything to do with the inner workings of an analyzer. Mitigating patient sample mix-ups requires risk management methods.
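The arithmetic behind these rates can be checked in a few lines (a sketch using only the figures quoted above):

```python
# Rates quoted above: 44 wrong site surgeries out of 2.6 million in 2008,
# and a reported patient sample mix-up rate of 0.0038928%.
wrong_site = 44 / 2_600_000            # as a fraction of all surgeries
per_million = wrong_site * 1_000_000   # events per million surgeries
mixup = 0.0038928 / 100                # mix-up rate as a fraction
ratio = mixup / wrong_site             # mix-ups relative to wrong site surgery

print(f"wrong site surgery rate: {wrong_site:.7%}")   # 0.0016923%
print(f"per million surgeries:   {per_million:.0f}")  # about 17
print(f"mix-up / wrong site:     {ratio:.1f}x")       # about 2.3x
```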

The consequence of a patient sample mix-up can of course be no harm, if, say, a negative newborn screening sample is mixed up with another negative newborn screening sample. But the potential for a serious consequence is still present, and the event can be prevented.

Wrong site surgery was addressed by the Universal Protocol, but this has clearly not been effective in preventing all wrong site surgeries. Thus, a one-size-fits-all FMEA doesn't necessarily work. Strictly speaking, one should use the term FRACAS, because the goal is to reduce the rate of an event that keeps recurring, not to prevent something that has never happened. So we need to pay more attention to risk management.

Risk Management Course

January 20, 2011

I see that my colleague Jim Westgard is providing a risk management course. I'm sure he'll do well with this, as he has done with all of his other programs, but here are some of the challenges:

No perceived need – To engage in a formal risk management program requires a commitment of resources. My perception is that most managers believe they already have a handle on risk management, partly because everyone knows something about risk management, even if that knowledge doesn't include the formal methods. Moreover, the benefit of risk management is preventing problems, which is less appealing to management than revenue generating ideas. (Of course, preventing problems reduces cost, but …).

Moreover, clinical laboratories are used to evaluating things like average bias, imprecision, and linearity. Even when total error methods such as Bland-Altman are used, what's assessed is the location of 95% of the differences. Risk management looks at a tiny fraction of the results – much less than 5% – and it is not restricted to the analytical properties of the assay. A patient sample mix-up could be just as deadly as an interfering substance.
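To make the 95% point concrete, here is a minimal sketch with simulated data (not from any real assay): Bland-Altman limits of agreement describe the central mass of the differences, while the rare gross errors that risk management targets sit far outside them.

```python
import random

random.seed(1)
# Simulated paired differences (candidate minus comparison); arbitrary units.
diffs = [random.gauss(0.5, 2.0) for _ in range(10_000)]
# Add a handful of rare gross errors (think sample mix-ups) - hypothetical.
diffs += [random.choice([-1, 1]) * random.uniform(20.0, 40.0) for _ in range(5)]

n = len(diffs)
mean = sum(diffs) / n
sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5

# Bland-Altman 95% limits of agreement: mean +/- 1.96 SD describe the
# central 95% of the differences ...
lo, hi = mean - 1.96 * sd, mean + 1.96 * sd
# ... while risk management cares about the few results far beyond them.
extreme = sum(1 for d in diffs if abs(d) > 15.0)
print(f"limits of agreement: ({lo:.2f}, {hi:.2f})")
print(f"gross errors beyond +/-15: {extreme} of {n}")
```

The five injected gross errors barely move the limits of agreement, which is exactly why a method focused on the central 95% cannot surface them.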

Along these lines, for industry and clinical laboratories that pass all inspections, there is the notion that everything is ok.

For industry products, where a FMEA or fault tree analysis (FTA) is required by the FDA for product approval, these FMEAs and FTAs are often documentation of the design rather than true FMEAs and FTAs that challenge the design.

Lack of resources – While a formal risk management program is not expensive, it is not free either, and today there is tremendous pressure not to spend. Purchasing software tools is optional; the cost of risk management is mainly time – people spending time in meetings. A couple of times in industry, I was asked to conduct risk management activities during lunch because management couldn't spare time away from projects. This was a bad sign and had a bad outcome.

Beware of the management of risk management – This is a somewhat tricky subject (full disclosure: Krouwer Consulting provides consulting on risk management). Risk management that encompasses FMEA, FTA, FRACAS, and reliability growth management originated in the defense industry, and these methods are widely used in aerospace and automotive as well as defense. They have been adapted to healthcare and in vitro diagnostics. They are not hard to learn or apply (in my case I knew nothing about these methods until I had guidance from a newly hired expert), but I have encountered people who have neither studied these methods nor have experience with them, yet are in positions to oversee their use, including writing guidance documents.

Proficiency – sometimes you are less proficient than you think

January 8, 2011

I was watching The Girl Who Kicked the Hornet’s Nest on my computer via Dutch TV and the Internet. The film is in Swedish so naturally there were Dutch subtitles. I think of myself as fluent in Dutch but I noticed quite a difference between a foreign language film with English subtitles and Dutch subtitles. With English subtitles, I read and understand the subtitle in a fraction of a second – it’s so fast that it doesn’t interrupt my watching the action on the screen. With Dutch subtitles, it’s different – although I read the subtitles OK, it takes longer and I’m noticeably going between reading the subtitles and watching the action. I am less proficient in Dutch than I thought.

Quite a few years ago, I learned to ice skate and the next year I started to play hockey. The first time on the ice, someone passed the puck to me but the puck was between my skates and stick. Having watched a lot of hockey on TV – this was the heyday of the Boston Bruins – I knew what to do. I could stick out my foot and let the puck bounce off and capture it with my stick. But my legs wouldn’t move and I wistfully watched as the puck whizzed by. The problem was that although I thought I was a good skater, all of my moves were well planned in advance. To make any move spontaneously was beyond my ability. After playing hockey for a year or two, my skating had improved so I was a proficient skater. Unfortunately, I was never proficient at hockey although this league did include former college players.

A while ago, there was a risk management conference call at CLSI – the call participants were writing a standard on risk management. I don't remember what prompted me to ask this question, but it was about HAMA interferences in hCG assays. To recap, women had been harmed by being treated for cancer on the basis of falsely elevated hCG results caused by HAMA interferences. My question was what lab directors were doing to prevent this situation, which was not a one-time problem. In particular, it seemed that diluting the sample could screen for these interferences. Two lab directors – both well known – gave the same response: that this is a rare but unavoidable problem, and that the suggestion to dilute samples was economically infeasible. Now, waving off my suggestion may be appropriate – financial constraints are real – but to say that this issue was unavoidable makes me question the lab directors' proficiency in risk management. One should look for other solutions.
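The dilution idea can be sketched simply. A true analyte should recover linearly on dilution, while a heterophile interference such as HAMA often does not; the hCG numbers below are hypothetical, purely for illustration.

```python
def dilution_recovery(neat_result, diluted_result, dilution_factor):
    """Percent recovery of a diluted sample versus the neat result.

    A true analyte should dilute linearly (recovery near 100%);
    heterophile (e.g., HAMA) interference often does not.
    """
    expected = neat_result / dilution_factor
    return 100.0 * diluted_result / expected

# Hypothetical hCG results (mIU/mL), illustrative only.
print(dilution_recovery(250.0, 125.0, 2))  # 100.0 -> dilutes linearly
print(dilution_recovery(250.0, 210.0, 2))  # 168.0 -> nonlinear, flag for interference
```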

EP21 Again

January 6, 2011

So EP21, the CLSI standard about total error, enters its fourth year of trying to advance from the “A” version to the “A2” version. This seems a pity, since the changes in the A2 version are minor (all the calculation methods are identical). The biggest complaint about the new version was caused by an error I made in overstating that pre- and post analytical error are part of the total error in EP21. I revised things to say that the opportunity for pre analytical error should not be excluded from EP21 experiments (rather than that EP21 would include all types of pre analytical error). But people still freak out at any mention of pre analytical error for EP21.

Any experiments to evaluate clinical laboratory assay error are just that – experiments. The protocols are not identical to the actual use of the assay but try to get as close as possible.

I contend that pre analytical error is not only part of the total error in EP21; it is also part of the bias in EP9, part of the imprecision in EP5, and so on for other EP protocols.

Here is a pO2 example. For manufacturers (and perhaps some laboratories), in an EP9 pO2 bias experiment the reference is a certified tank of gas, which is used to equilibrate whole blood in a tonometer. A technician pulls some blood from the tonometer into a syringe to inject into the candidate instrument. Any air left in the syringe will alter the pO2 value and likely add error to the result. Of course, an experienced technician will expel all air from the syringe and minimize the pre analytical error, but not all technicians are experienced, even experienced technicians make errors, and not all bubbles are easily seen. One could eliminate this source of pre analytical error by having an engineer directly connect the tonometer to the candidate instrument with valves, so that blood could be sampled directly into the instrument without the need for a syringe. Of course this is not done, and were it done, it would be bogus, because the pre analytical error that comes from the syringe is part of the assay error.

For another example, a whole blood potassium analyzer that is compared to a laboratory potassium result can have (pre analytical) error due to hemolysis of the red cells in the whole blood sample.

In another example, a point of care glucose meter that is compared to a laboratory glucose result can potentially have (pre analytical) error due to an improper finger stick for the point of care analyzer when compared to the venous sample drawn from the same patient for the laboratory analyzer.

For the pO2 and potassium examples, the pre analytical error comes with the experiment and represents what may occur in actual use of the assay. The glucose example is the subject of a caution that is new to EP21 but equally valid for EP9, EP5, and other EP protocols: it would be wrong to exclude the potential error from finger sticks by drawing venous blood for the point of care device.

Even when a split serum sample is used, user error can occur that is independent of the analytical system. When I headed clinical trials for a manufacturer, we noticed that customers in clinical trials almost never obtained results as good as our internal results. Although it is hard to pinpoint the source of the differences, the experience of company technicians – who had run the same assay for years – probably played a role. Thus, inexperienced operators did things – even on highly automated systems – that gave worse results.

So the bottom line is that EP21 merely notes that for most assays pre analytical error may be contained in the result, and that if a situation such as the glucose example arises, one shouldn't go out of one's way to exclude pre analytical error by doing something outside the routine use of the assay.

And finally, the “total” in total error is probably not the best term, but I don't know of a better one. This “total” error really means the total error from the experiment, which is of course a subset of the true total error, which contains error from all reagent lots, all calibrations, and so on.
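As a sketch of what “total error from the experiment” can mean, here is one common nonparametric summary of paired differences – the central 95% interval – computed on simulated data (an illustration, not the EP21 calculation verbatim):

```python
import random

def total_error_interval(diffs, coverage=0.95):
    """Nonparametric central interval of paired differences.

    Returns the empirical (lower, upper) bounds containing roughly
    `coverage` of the candidate-minus-comparison differences.
    """
    s = sorted(diffs)
    n = len(s)
    tail = (1.0 - coverage) / 2.0
    return s[int(tail * (n - 1))], s[int((1.0 - tail) * (n - 1))]

random.seed(0)
# Simulated candidate-minus-comparison differences; illustrative only.
diffs = [random.gauss(1.0, 3.0) for _ in range(2000)]
lo, hi = total_error_interval(diffs)
print(f"95% of observed differences fall within ({lo:.1f}, {hi:.1f})")
```

Note that this interval only reflects the conditions of the experiment; error from other reagent lots, calibrations, and operators is outside its reach, which is the point of the paragraph above.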

Rethinking Quality Control (QC) in Clinical Chemistry

January 5, 2011

In the world of laboratory medicine, with so much focus on molecular assays, it's nice to see an article on the quality of basic clinical chemistry. In the January issue of Clinical Chemistry (subscription required), an article shows that QC doesn't work very well for a number of assays because, after a reagent lot change, the QC results behave differently from the patient results. That is, for a reagent lot change, QC results were compared before and after the change, as were results for a number of patient samples (I assume the same patient samples), and the before and after differences were then compared.
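The design can be sketched as follows (the numbers are illustrative, not from the article): compute the shift across a reagent lot change separately for QC material and for the same patient samples, then compare the two shifts.

```python
def mean_shift(before, after):
    """Mean result after a reagent lot change minus the mean before it."""
    return sum(after) / len(after) - sum(before) / len(before)

# Hypothetical results for one assay, same units throughout.
qc_before = [4.00, 4.02, 3.98, 4.01]
qc_after  = [4.01, 3.99, 4.02, 4.00]   # QC barely moves across lots
pt_before = [3.6, 4.4, 5.1, 3.9]
pt_after  = [3.9, 4.7, 5.4, 4.2]       # the same patients shift by ~0.3

print(f"QC shift:      {mean_shift(qc_before, qc_after):+.2f}")
print(f"Patient shift: {mean_shift(pt_before, pt_after):+.2f}")
# When the two shifts disagree, QC is not tracking patient results.
```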

I would have liked to see a control in which the same experiment was conducted without changing reagent lots, just to confirm that differences between QC and patient results wouldn't be observed in that case as well.

I suspect that the study was performed because of experience that things didn't look right upon reagent lot changes (BTW, the first author, Greg Miller, is the president-elect of AACC). For the one assay for which raw data were supplied, this looks like the case.

So with CMS's new regulation that allows a reduction in the frequency of QC, it seems all the more prudent to make sure that QC is doing what it's supposed to do.