A replay

July 14, 2010

I recently published a Letter to the Editor in Clinical Chemistry that critiques a simulation of the effect of glucose error on insulin dosing. The authors’ reply to my Letter leaves something to be desired. Since a subscription is required to view these, here is the gist of my Letter and the authors’ reply.

The authors modeled glucose error as two times the CV plus bias, to simulate 95% of the error. I argued that one also needs to:

1) model random patient interferences

2) model user error

3) recognize that modeling 95% of the errors isn’t good enough; 100% of the errors must be modeled (a toy illustration follows this list).
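
To make the distinction concrete, here is a minimal simulation sketch in Python. The bias, CV, interference, and user-error figures are invented for illustration and are not the authors’ model; the point is only that a model built to cover 95% of analytical error never sees the tail errors that the added sources produce.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_glucose = 100.0  # mg/dL, a single illustrative true value

# Analytical error only, roughly the authors' approach: bias plus a CV,
# so that bias + 2*CV covers about 95% of the error (figures invented).
bias_pct, cv_pct = 2.0, 5.0
analytical = true_glucose * (1 + bias_pct / 100 + rng.normal(0, cv_pct / 100, n))

# Points 1 and 2: add random patient interference and occasional user error.
# These distributions are placeholders for illustration only.
interference = rng.normal(0, 8.0, n)                              # e.g., hematocrit effect, mg/dL
user_error = rng.binomial(1, 0.001, n) * rng.normal(0, 60.0, n)   # rare gross errors
total = analytical + interference + user_error

# Point 3: the harm comes from the tails, which a 95% model never sees.
for name, x in [("analytical only", analytical), ("all error sources", total)]:
    err = np.abs(x - true_glucose)
    print(f"{name}: 95th percentile = {np.percentile(err, 95):.1f} mg/dL, "
          f"maximum = {err.max():.1f} mg/dL")
```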

They answered #1 only, saying I was correct but that they don’t know how to model random patient interferences. I don’t think it’s that hard. Putting “hematocrit interference” glucose into Google yielded 435 results, including a paper by one of the authors! (Evaluation of the impact of hematocrit and other interference on the accuracy of hospital-based glucose meters). Another title in this list is: A mathematical model to assess the influence of hematocrit on point of care glucose meter performance. Of course, there are other interfering substances, and there are general methods to assess them.

But more importantly, the authors did not answer points 2 and 3 in their reply.

In 2001, I critiqued a similar paper (one of the current authors co-wrote that earlier article), and their reply was similar. In my earlier critique, I mentioned only point 1 above.


The real story about acceptable risk for diagnostic assays

July 6, 2010

Both manufacturers and clinical laboratories use risk management techniques such as FMEA (Failure Mode Effects Analysis), which involve the following steps (a toy ranking example appears after the list):

  • enumerating potential failure mode events that could cause patient harm
  • classifying the events with respect to severity and probability of occurrence
  • preparing a Pareto chart or table of the events with the highest severity × probability
  • implementing mitigation for the most important events
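
As a toy illustration of the ranking step (the failure modes and 1–5 scores below are invented, and this is not any particular scoring standard):

```python
# Toy FMEA ranking sketch: the failure modes and 1-5 scores are invented.
failure_modes = [
    # (event, severity 1-5, probability 1-5)
    ("Sample mix-up",           5, 2),
    ("Reagent lot degradation", 3, 3),
    ("Instrument downtime",     4, 2),
    ("Transcription error",     4, 1),
]

# Rank by severity x probability and mitigate from the top of the Pareto list down,
# stopping when the residual risk is judged acceptable.
ranked = sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True)
for event, severity, probability in ranked:
    print(f"{event:<26} severity={severity} probability={probability} "
          f"score={severity * probability}")
```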

One is done when the mitigations have reduced the residual risk to an acceptable level. Unfortunately, there is no guidance as to what an acceptable level is, and that is the subject of this blog entry.

Manufacturers could consult the ISO standard on risk management, ISO 14971, which suggests performing a risk-benefit analysis. While this might work for some medical devices, it is not useful for diagnostic assays. Consider glucose meters as an example. Each year, a small number of patients are seriously harmed by incorrect glucose meter results. Yet, if all glucose meters were removed from the market, the harm from the lack of information would be much worse.

For clinical laboratories, the decision about acceptable risk is up to the laboratory director. Do laboratory directors have some special power to know when residual risk is acceptable? The answer is no, and to explore how that decision is made – which applies to manufacturers as well – consider a blood gas analyzer. The results from these instruments are time critical; if a result is unavailable because the instrument is down, serious harm could occur. Therefore, laboratories use the mitigation principle of redundancy and have multiple instruments. If one knows the MTBF (Mean Time Between Failures) and MTTR (Mean Time To Repair), one can calculate the probability (i.e., the risk) that a result will be unavailable because all instruments have failed simultaneously, as a function of the number of instruments. While one could lower this risk to an infinitesimal amount, one is financially constrained, and it is this constraint that plays a major role in determining the level of acceptable risk. And the basis of the financial constraint is the socioeconomic climate of the prevailing country.
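
Under the usual simplifying assumptions – independent instruments and a steady-state unavailability of MTTR / (MTBF + MTTR) per instrument – the probability that every instrument is down at once falls off geometrically with the number of instruments. A minimal sketch, with invented MTBF and MTTR values:

```python
# Sketch: probability that all blood gas analyzers are down at the same time,
# assuming independent failures and steady-state behavior.
# The MTBF and MTTR values are invented for illustration.
mtbf_hours = 500.0   # mean time between failures per instrument
mttr_hours = 4.0     # mean time to repair

# Fraction of time any one instrument is unavailable.
unavailability = mttr_hours / (mtbf_hours + mttr_hours)

for n_instruments in range(1, 5):
    p_no_result = unavailability ** n_instruments
    print(f"{n_instruments} instrument(s): P(all down simultaneously) = {p_no_result:.2e}")
```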

One would like to think that risk is always low enough that if serious harm does occur, it is due to some unanticipated event. (Of course, this brings up another issue: how much effort should be devoted to studying things in order to reduce the risk of unanticipated events.) But there are known events that can cause serious harm. I once heard from some laboratory directors that, although they know HAMA interference can cause wrong results and has caused serious harm, they do not put procedures in place to test results for HAMA interference, due to economic constraints.