Formal and informal risk management

July 6, 2011

Our paper on error grids has been published online. As is often the case, there is another opinion paper that comments on our paper. One of the comments characterizes us as talking about “difficulties by clinical laboratories in adopting risk management techniques.” Actually, what we said is that “the risk management techniques including FMEA (Failure Mode Effects Analysis), fault trees, and FRACAS (Failure Reporting And Corrective Action System) are not well understood by clinical laboratories.”

To expand on things – and I’m sure previous blog entries cover this – the commentator cites ISO 15189 as a valuable tool to implement risk management. ISO 15189 is like ISO 9001 and I have commented on problems with ISO 9001 (1) and also with Six Sigma (2).

On another level, working on CLSI evaluation protocols, which are mainly statistical, I notice that one or perhaps two people on a subcommittee know the statistics. The other members make valuable contributions but steer clear of the statistics because they don’t know them. But risk management is different. Everybody knows something about risk management – for example, when driving, can I safely change lanes? But few people are knowledgeable about formal risk management as practiced by aerospace or automotive engineers, and it is these techniques that have been adapted to healthcare. The problem is that risk management standards are written by groups in which all participants have an equal say because of their informal knowledge, even though they lack the formal knowledge. So what gets transferred to everyone misses the mark.

The commentator pleads for a systematic approach that “encompasses an infrastructure able to capture and – particularly – learn from adverse patient outcomes.” But this is exactly what FRACAS does – which the commentator lists in his opening.
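To make the point concrete, here is a minimal sketch, in Python, of the capture-and-learn loop that FRACAS formalizes: failures are reported, corrective actions are recorded, and recurring failure modes are trended. The class and field names are my own illustration and are not taken from any FRACAS standard.

```python
from dataclasses import dataclass
from collections import Counter

# Illustrative sketch only: the structure and field names are hypothetical,
# not taken from any FRACAS standard.

@dataclass
class FailureReport:
    report_id: int
    description: str        # what went wrong, e.g. "mislabeled specimen"
    failure_mode: str       # classification used for trending
    severity: int           # e.g. 1 (minor) to 5 (patient harm)
    corrective_action: str = ""
    closed: bool = False

class Fracas:
    def __init__(self):
        self.reports = []

    def report(self, description, failure_mode, severity):
        # The "capture" step: every failure gets a report.
        r = FailureReport(len(self.reports) + 1, description, failure_mode, severity)
        self.reports.append(r)
        return r

    def close(self, report_id, corrective_action):
        # The "corrective action" step: record what was done to prevent recurrence.
        r = self.reports[report_id - 1]
        r.corrective_action = corrective_action
        r.closed = True

    def trend(self):
        # The "learn" step: which failure modes keep recurring?
        return Counter(r.failure_mode for r in self.reports)

if __name__ == "__main__":
    log = Fracas()
    log.report("Glucose reported on a hemolyzed sample", "pre-analytical", 3)
    log.report("Expired calibrator lot used", "analytical", 2)
    log.close(2, "Added a lot-expiry check to the morning startup procedure")
    print(log.trend())  # Counter({'pre-analytical': 1, 'analytical': 1})
```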

So I stand by our original statement: “the risk management techniques including FMEA (Failure Mode Effects Analysis), fault trees, and FRACAS (Failure Reporting And Corrective Action System) are not well understood by clinical laboratories.”

References

  1. Krouwer JS. ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred Qual Assur 2004;9:39-43.
  2. Krouwer JS. Six Sigma can be dangerous to your health. Accred Qual Assur 2009;14:49-52.

Evidence-based error grid limits

July 3, 2011

Our paper on error grids has been published online. As is often the case, there is another opinion paper that comments on our paper. One of the comments disagrees with us that error grid limits should be based on clinician opinion. The commentators favor limits “defined retrospectively, after having accumulated, analyzed and troubleshot a large number of clinical adverse events strictly related to laboratory errors which have arisen throughout the total testing process.” One rationale for this comment is that it is an evidence-based approach.

Now I’m all for this approach when it is practical. For example, to assess the best treatment for prostate cancer, rather than performing a series of randomized clinical trials, one could follow the 200,000 patients diagnosed each year in the USA with prostate cancer for several years with a questionnaire. If this had been done 20 years ago, we would now have 2 million records for patients diagnosed at least 10 years ago.

But I would challenge the commentators to provide evidence that there would be sufficient data for their evidence-based approach. How often does one know that a laboratory error was responsible for an adverse event and, just as important, the magnitude of that error? Moreover, the only errors that would help one decide on limits are errors close to the proposed limits. For example, we do not need an actual case of a glucose meter reading 300 mg/dL when truth is 30 mg/dL to know that this error is harmful. What we would need are actual adverse events for smaller errors, with the magnitude of each error known precisely. Sorry, but I don’t see this happening anytime soon.
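To illustrate, here is a toy sketch in Python. The limit used below (the larger of 20% of the reference value or 20 mg/dL) is invented for this example and is not a limit from our paper or from any standard; the point is simply that a gross error needs no adverse-event evidence, while errors near the boundary are the only informative ones and require precisely known magnitudes.

```python
# Toy illustration only: the limit (the larger of 20% of the reference value
# or 20 mg/dL) is made up for this example, not taken from our paper or from
# any standard.

def within_limit(reference, measured, pct=0.20, abs_mgdl=20.0):
    """Return True if the measured glucose is within the illustrative limit."""
    allowed = max(abs_mgdl, pct * reference)  # absolute floor at low glucose
    return abs(measured - reference) <= allowed

# A gross error: no adverse-event report is needed to know this is harmful.
print(within_limit(reference=30, measured=300))   # False (error of 270 mg/dL)

# Errors near the boundary are the ones that would actually inform a limit,
# and for these the true magnitude of the error must be known precisely.
print(within_limit(reference=100, measured=119))  # True  (19% error)
print(within_limit(reference=100, measured=122))  # False (22% error)
```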

But there’s another important reason for not favoring the commentators’ suggestion. Going back to the prostate cancer example: unless there’s a cure for prostate cancer, each year there will be 200,000 new records that can be used to assess side effects by treatment, recurrence by treatment, and many other things of interest to patients. The data is there; all we have to do is collect it. But waiting for adverse events due to laboratory error in order to set limits is akin to waiting for planes to crash in order to improve safety. Of course, when planes do crash, the information is used, but design and regulation are the preventive measures. Airplane safety is a good comparison to laboratory error because about 85% of airplane accidents are due to pilot error and the rest are due to aircraft problems.


Clinical vs. regulatory specifications

July 2, 2011

Our paper on error grids has been published online. As is often the case, there is another opinion paper that comments on our paper. One of the comments is that our classification of specifications as either clinical or regulatory is equivocal and potentially dangerous. Maybe our writing was not clear enough. We are just stating the way things are.

Thus, some specifications (clinical specifications) state what is needed medically, whether it can be achieved or not. Such specifications are hard to find; in our paper we cited a troponin specification because it was proposed by cardiologists and was not achieved at the time.

Other specifications (regulatory specifications) state limits that are currently being achieved. Regulators must do this because if they raised the bar above currently achieved performance, the assay would not be available, which could cause more harm than allowing it to be used.

Thus, our paper just states the difference between these two types of specifications, and adding that clarity is not equivocal. It is what it is.

What I object to is that some specifications, such as the ISO glucose meter specification 15197, try to mix up these two specification types by saying that this ISO (regulatory) specification is based on medical need. Again, regulatory specifications are based on current assay performance and the fact that withholding the assay from use would be more harmful than allowing it to be used.