If I could only push a button

March 30, 2011

At a recent CLSI Evaluation Protocols meeting, one comment was a desire for simpler documents; the lab director who made it doesn’t want to see formulas and would basically like to have his statistical questions answered by pushing a button.

Now in one sense, this is what we all want – for our jobs to be simpler – and evaluation protocols by their nature contain statistical concepts (and concomitant formulas) that remain elusive to many. But the fact is that the topic these evaluation protocol documents address – the quality of clinical laboratory results – is the responsibility of lab directors. Moreover, after many revisions, these documents have been simplified as much as possible. So lab directors, you need to either learn this stuff or get someone knowledgeable on your staff. There are no magic buttons.

Adverse event rates

March 29, 2011

There is a thought-provoking blog post here, which references this post. The ur-reference is here (the Supreme Court decision). These posts concern a lawsuit against the drug company Matrixx, whose drug caused problems. In particular, …

“Matrixx contended that the bar was statistical significance, and that anything short of that was not a “material event” that had to be addressed.”

I have commented before that the use of point estimates with confidence intervals provides more information than hypothesis testing. In either case, one has to beware that assumptions (distributions, random sampling) may not be met, or that biases may exist in the study, in which case the estimates are wrong.

As a consultant to medical diagnostic companies, I frequently have to correct the statement from a study (especially with hypothesis testing): “Compound A was found not to interfere with assay XYZ.” The conclusion should be stated as: “Interference from Compound A was not detected with assay XYZ.”
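To make the confidence-interval wording concrete, here is a minimal sketch of how a paired interference experiment might be summarized as a point estimate with a confidence interval rather than a bare hypothesis test. The dataset, function name, and allowable-interference limit are all hypothetical, invented purely for illustration:

```python
import math

def interference_ci(diffs, t_crit):
    """Mean paired difference (with-interferent minus without) and a
    two-sided confidence interval, using a supplied t critical value."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    half_width = t_crit * math.sqrt(var / n)
    return mean, mean - half_width, mean + half_width

# Hypothetical paired differences for 10 samples (mg/dL);
# t critical value 2.262 is for 9 degrees of freedom, 95% two-sided.
diffs = [0.8, 1.2, 0.5, 1.0, 0.9, 1.1, 0.7, 1.3, 0.6, 0.9]
bias, lo, hi = interference_ci(diffs, t_crit=2.262)

allowable = 2.0  # hypothetical allowable interference limit (mg/dL)
if -allowable < lo and hi < allowable:
    print(f"Interference beyond {allowable} mg/dL not detected "
          f"(bias {bias:.2f}, 95% CI {lo:.2f} to {hi:.2f})")
```

Note that the reported conclusion is still the hedged one: interference beyond the limit was not detected, not "Compound A does not interfere."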

But there is another way to view things. For a diagnostic assay, the manufacturer performs a series of relatively short studies designed to estimate relevant performance parameters. If good enough, the data are submitted to the FDA and the assay is approved. After the assay is released, millions of results are reported and this provides a potential new data stream; namely adverse events or even better, the adverse event rate (the number of adverse events over the number of assays run).

Manufacturers try to reproduce adverse events in-house under controlled conditions. This can be difficult because the exact conditions under which the event occurred are often not available to the in-house scientists. And there can be allegations that the assay was not used properly. But the point is that more attention should be paid to adverse event rates. This is the real data about assay quality.
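The adverse event rate described above is a binomial proportion, so with millions of reported results it can itself be stated as a point estimate with a confidence interval. A minimal sketch, using the Wilson score interval because it behaves sensibly for very rare events (the counts and function name are hypothetical):

```python
import math

def wilson_ci(events, n, z=1.96):
    """Wilson score confidence interval for a proportion; suited to
    very small rates, where the naive interval can dip below zero."""
    p = events / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return p, center - half, center + half

# Hypothetical: 12 confirmed adverse events in 1,000,000 reported results.
rate, lo, hi = wilson_ci(12, 1_000_000)
print(f"adverse event rate = {rate:.1e}, 95% CI ({lo:.1e}, {hi:.1e})")
```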

On becoming detached

March 23, 2011

Regarding my last blog entry about getting an article published after three tries, one of the journal rejections did make an impression on me. I often get an article accepted after an initial rejection followed by a rebuttal to the reviewers and resubmission. Sometimes, the article doesn’t make it and I can accept that. However, for one of the journals this time was different. My rebuttal, however logical, engendered a response that was something like – sorry pal, fuggedaboutit.

My reaction is to start to detach. The last time this happened was when I worked in industry. Through acquisitions and management changes, I (and others) fell out of favor and all that that implies. I became more detached – when I saw problems I acted like the advice in “The Godfather” – mention it, don’t insist. Previously, I would have been more engaged and insisted. There are many dangers of detachment and one of the simple solutions is to leave, which is what I did.

Third time’s a charm – again

March 9, 2011

It seems that publishing is harder than ever. When I write about quality in laboratory medicine, I either critique other articles with a letter to the editor or provide my own recommendations. This latter type of article is more difficult because when I explain why I’m recommending “B” and not “A”, the reviewers seem to belong to the “A” camp and find ways to dismiss my piece.

The latest paper, written with George Cembrowski, is a recommendation to use error grids. It was rejected by two journals and now should appear after the third try in Clinical Chemistry and Laboratory Medicine. This did happen once before.

Risk management within financial constraints – a mini course

March 2, 2011

Actually, a femto-course would be more accurate. My last post was a critique; this post outlines what I would do for risk management in a clinical laboratory to protect against patient harm.

  1. Excluding phlebotomy, there are only two top-level errors a laboratory can make that cause patient harm: providing an incorrect result to a clinician, or no result. (Thanks to Don Powers, from whom I first saw this formulation.)
  2. For incorrect results, although the severity of harm depends on several factors, the most severe harm should be used in classification.
  3. There are two types of errors and risk management tools:
    1. (potential) errors that have never occurred – metric: probability of occurrence; risk management tool: FMEA
    2. errors that have occurred – metric: rate of occurrence; risk management tool: FRACAS

  4. Risk management attempts to answer 4 questions:
    1. what can go wrong (FMEA) or what did go wrong (FRACAS)
    2. how serious
    3. how likely
    4. what to do about it.

  5. To answer 4.1, determining what can go wrong requires a process map, not just of the assay but of all sub-processes. For example, hiring and training policies could be potential causes of laboratory error.
  6. The potential failures of each process step are enumerated. These process steps may be:
    1. Basic process steps (centrifuge a sample)
    2. Detection steps (examine a sample for hemolysis)
    3. Recovery steps (prevent a hemolyzed sample from being analyzed)

  7. Items 1 and 2 above are used to classify the severity of the effect of an error (question 4.2).
  8. To answer 4.3:
    1. The probability of occurrence is estimated for errors that have never occurred. This is done using judgment on a qualitative scale (e.g., 1-5).
    2. The frequency of occurrence is used for errors that have occurred.
    3. The two scales must be related (e.g., an actual error rate must be higher than the probability of an error that has never occurred).

  9. To answer 4.4, one prepares a Pareto ranking by multiplying occurrence by severity and implements mitigations for the top items until:
    1. Rates for errors which cause serious harm are zero (FRACAS)
    2. Probability of occurrence for errors which cause serious harm is lower than some designated amount (FMEA)
    3. One runs out of funds allotted for this purpose.

The risk management described above is a combination of FMEA and FRACAS.
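Steps 3, 8, and 9 above can be sketched in a few lines of code. The failure modes, scores, costs, and budget below are invented for illustration; "occurrence" here stands for either the 1-5 judged probability scale (FMEA) or an observed rate mapped onto the same scale (FRACAS):

```python
def rank_and_mitigate(items, budget):
    """Pareto-rank failure modes by occurrence x severity, then
    greedily fund mitigations until the budget runs out."""
    ranked = sorted(items, key=lambda it: it["occ"] * it["sev"], reverse=True)
    funded = []
    for it in ranked:
        if it["cost"] <= budget:
            budget -= it["cost"]
            funded.append(it["name"])
        else:
            break  # stop at the first unaffordable mitigation
    return ranked, funded

# Hypothetical failure modes (severity and occurrence on 1-5 scales).
items = [
    {"name": "hemolyzed sample not detected", "sev": 5, "occ": 3, "cost": 20},
    {"name": "delayed result",                "sev": 2, "occ": 4, "cost": 10},
    {"name": "mislabeled sample",             "sev": 5, "occ": 2, "cost": 15},
]
ranked, funded = rank_and_mitigate(items, budget=30)
```

With these made-up numbers, the hemolysis failure mode ranks first (score 15) and exhausts most of the budget, which is exactly the "run out of funds" stopping rule of step 9.3.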

Risk Management – within financial constraints

March 2, 2011

My colleague Jim Westgard wrote a piece about risk management that deserves some comments. He dislikes the risk scoring schemes commonly in use because he says they “reflect subjective opinions and qualitative judgments.” He recommends that:

  • Defects are scored using probability of occurrence from 0 to 1.0
  • Severity is scored from 0 to 1.0
  • Probability of detection is scored from 0 to 1.0

I mention in passing that two of these items are probabilities, but severity is not a probability; it is arbitrarily ranked from 0 (no harm) to 1.0 (serious harm). Since the three items are multiplied together, I don’t know what the resulting product means.

But here are my two main points. Take probability of defect occurrence first. Say a defect is a very wrong result caused by electrical noise in a response, undetected by instrument algorithms. Westgard would like to change the probability of occurrence of this event from a scale such as extremely unlikely = 1, very unlikely = 2, and so on to a specific probability from 0 to 1.0. He wants to do this to prevent subjective opinions and qualitative judgments.

Now subjective opinions about this type of error from a person on the street would not make sense. But the opinion of a group of engineers who have developed the system would be of interest, and yes, that opinion is qualitative. But how does Westgard propose to get a quantitative probability? Who will provide it? It is possible through experiments to get an estimate for this defect, but this could involve an enormous effort, and this is only one potential defect. There could be thousands of potential defect causes, often depending on other causes and each requiring detailed experiments. Remember that a wrong result can be caused by an operator error or a pre- or post-analytical error, not just an analytical error.

My other beef is about including probability of detection (see also the reference below). The problem is that detection is a process (QC is just one means of detection). For any incorrect result, there are many detection possibilities. For most analyzers, operators examine samples, a series of instrument algorithms are programmed to detect questionable results, QC is performed, serial results are queried using delta checks, and so on. And because detection is a process, there is the opportunity for failure of detection (often from multiple causes). So, for example, QC may have some calculated probability of success, but there is the potential for failure because the control was not reconstituted properly, there was a bad vial, the control was expired, and so on.

Moreover, detection by itself will not prevent an error. One must also have a recovery. So with QC, one does not report results until troubleshooting has been completed. But troubleshooting (i.e., the recovery) is a process and it too can fail (again with multiple causes), and its potential for failure is ignored in the Westgard treatment.
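The point that detection and recovery are themselves fallible processes can be put in numbers. In this sketch, all probabilities are invented for illustration; it shows how a nominally strong QC rule contains far less risk once failures of the detection process itself and of the recovery (troubleshooting) step are included:

```python
def residual_error_prob(p_error, p_detect, p_detect_process_ok, p_recover_ok):
    """Probability that an error occurs AND escapes containment.
    Containment requires the detection signal to fire, the detection
    process itself to have worked (control reconstituted properly, not
    expired, ...), and the recovery (troubleshooting) to succeed."""
    containment = p_detect * p_detect_process_ok * p_recover_ok
    return p_error * (1 - containment), containment

# Hypothetical numbers: a QC rule with 95% power to flag the error, a
# detection process that itself works 90% of the time, and a recovery
# that succeeds 95% of the time.
residual, containment = residual_error_prob(
    p_error=1e-4, p_detect=0.95, p_detect_process_ok=0.90, p_recover_ok=0.95)
```

With these made-up numbers, the effective containment drops from a nominal 0.95 to about 0.81, so multiplying in a single "probability of detection" overstates how much risk is actually removed.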

So risk management using traditional FMEA isn’t so bad after all. But if you want to do something quantitative such as quantitative fault trees, it is unlikely to be within the financial constraints of your environment.


Schmidt MW. The use and misuse of FMEA in risk analysis. Medical Device and Diagnostic Industry, March 2004, p. 56. Available at http://www.devicelink.com/mddi/archive/04/03/001.html