Potential Risks and Real Problems

October 22, 2006

I’ve never been that good at making up potential problems, so here’s a real one: “Roche Diagnostics Announces Nationwide Recall on Medical Device Used to Determine Blood Clotting Time” (see http://www.fda.gov/oc/po/firmrecalls/roche10_06.html).

It’s not my intention to criticize Roche, as problems occur at all companies. What is more interesting is to offer some observations about standards groups, particularly since there is one standard in development (CLSI EP22) that attempts to allow manufacturers to reduce the QC (quality control) frequency of diagnostic assays by using risk management techniques.

Although (as an advisor to the group) I have not been to any of the face-to-face meetings, I have participated in EP22 teleconferences and have attended many other CLSI face-to-face meetings. Such meetings are often held in hotel conference rooms, with comfortable chairs, an ample supply of fresh coffee, croissants, and the promise of lunch around the corner. As one relaxes, one might hear how, through fault tree analysis, the potential problem ABC has mitigation XYZ applied to it so that it will never occur.

Yet in the real world, things are not that simple (or slow paced). Using the formal language of risk management, one must also consider:

  • Have all potential problems been identified?
  • Are all mitigations 100% effective? (Many are not.)
  • What about the softer, non-technical issues regarding staff, materials, and the working environment that hardly ever appear in a fault tree?

One of the advantages of quality control is that for some problems it will expose the problem regardless of whether one knows the problem’s source. Ironically, as a consultant I have heard, during discussions after a problem occurred, “that shouldn’t be an issue if customers run QC.”


Customer Misuse – What it is and what to do about it

October 18, 2006

The Issue

A manufacturer designs a diagnostic assay system and after considerable testing, the assay is approved by regulators and sold. Subsequently, there are some patient harm incidents traced back to incorrect results from this assay. Upon analysis, the manufacturer maintains that the incorrect results were caused by customer misuse.

The spectrum of customer misuse

When I worked for a diagnostic company, I remember a product that had poor reliability. In meetings devoted to solving the reliability issues, the head of engineering claimed that for many of the problems he could do nothing, because the problem was caused by customer misuse. One of these problems stuck in my mind because the customer was required, at a regular frequency, to disassemble a valve and clean it. Rebuilding the valve that often seemed excessive, and the next-generation product employed a new design that obviated this maintenance.

An example at the other end of the spectrum is that some instrument systems allow the user to delay a required calibration. If the user continues to delay the calibration and either does not run quality control or ignores failed quality control, incorrect results could easily be generated.

Although in the second case one could argue that the user violated the policy set up by the laboratory, the same might be true of the first case.

Of course there will be customer misuse issues which are less black and white (if in fact the above examples are).

These are hypothetical examples. A real example was reported for a home glucose analyzer (1): users did not completely insert the reagent strip and got incorrect results, leading to some hospitalizations. Not completely inserting the strip did not trigger any error message and also represented an example of not following the instructions (i.e., customer misuse). In this case, the government successfully brought legal action against the manufacturer. In another example, a clinical laboratory (Maryland General Hospital) grossly violated its own policies (2).

Regarding customer misuse and blame, the taxonomy for errors described by Marx (3) is of interest.

The FMEA (fault tree) approach to customer misuse

If one sets up an FMEA or fault tree, the effect “incorrect results” can be caused by a variety of events, and the cause of these events can be the customer doing something incorrectly (customer misuse). In fact, some companies divide FMEAs into separate categories, with one FMEA devoted to customer use (including misuse).

There are several questions to be addressed in these customer-use FMEAs, as with all FMEAs (a scoring sketch follows the list):

  • What is the severity of the event caused by the misuse? E.g., could it lead to patient harm by causing incorrect results?
  • What is the estimated probability of occurrence?
  • What is the best control or mitigation to prevent this misuse error from occurring?
  • What is the best way to detect this misuse error and recover from it?
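
To make these questions concrete, here is a minimal sketch in Python of how a customer-use FMEA row might be scored, using the common Risk Priority Number convention (RPN = severity × occurrence × detection). The failure modes and all of the ratings below are hypothetical, not taken from any actual FMEA.

    from dataclasses import dataclass

    @dataclass
    class MisuseFailureMode:
        """One row of a hypothetical customer-use FMEA (1-10 scales)."""
        description: str
        severity: int    # 10 = worst outcome, e.g., patient harm from incorrect results
        occurrence: int  # 10 = misuse expected to happen very frequently
        detection: int   # 10 = misuse very unlikely to be detected before harm

        @property
        def rpn(self) -> int:
            # Risk Priority Number: a common (not universal) FMEA ranking
            return self.severity * self.occurrence * self.detection

    # Hypothetical entries, echoing the examples discussed above
    modes = [
        MisuseFailureMode("reagent strip not fully inserted", 9, 4, 8),
        MisuseFailureMode("required calibration repeatedly delayed", 8, 3, 5),
    ]
    for m in sorted(modes, key=lambda m: m.rpn, reverse=True):
        print(f"RPN {m.rpn:4d}  {m.description}")

Ranking by RPN simply prioritizes which misuse modes get mitigation effort first; it does not by itself answer the prevention and detection questions.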

The spectrum of solutions for preventing customer misuse

Just as there is a spectrum of customer misuse, there is a spectrum of mitigations to prevent it. The default mitigation is virtually always customer training, ranging from the instruction manual (and offshoots such as videos) to onsite training. Since there is always an instruction manual (or package insert), this mitigation in practice means improving the manual, and the mechanism for doing so is usability testing. The other end of the spectrum is redesign. As the reliability consultant Ralph Evans suggested, “make it easy to do the right thing and hard to do the wrong thing.” A previous essay on a medical error also illustrates this spectrum.

Some of the mitigations are also not that black and white. Consider a manufacturer that has conducted extensive interference testing for an assay and reports in the product insert that 7 drugs interfere with the assay: when any of these substances is present above the concentration listed, the manufacturer’s assay should not be used. If the clinical laboratory is wired into the hospital’s EMR (Electronic Medical Record), assuming an EMR exists, rules could be built into the LIS (Laboratory Information System) to follow the manufacturer’s recommendation, as sketched below. Without these computerized systems, one would have to manually inspect (potentially) each patient’s medical record, which is a daunting task.
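
As a sketch of what such an LIS rule might look like (the drug names, concentration limits, units, and function name are invented placeholders, not from any actual product insert):

    # Hypothetical interference table from a product insert:
    # drug -> concentration above which the assay should not be used (mg/L)
    INTERFERENCE_LIMITS = {
        "acetaminophen": 20.0,
        "ascorbic_acid": 30.0,
        # ... the remaining listed drugs
    }

    def assay_contraindicated(patient_drug_levels):
        """Return the listed drugs present above their interference limits.

        patient_drug_levels maps drug name -> concentration from the EMR.
        A non-empty result means the assay result should be suppressed or
        flagged per the manufacturer's recommendation.
        """
        return [drug for drug, limit in INTERFERENCE_LIMITS.items()
                if patient_drug_levels.get(drug, 0.0) > limit]

    # Example: this patient record would trigger the rule
    print(assay_contraindicated({"acetaminophen": 25.0, "ibuprofen": 5.0}))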

The current environment

Ever since the AutoAnalyzer was invented, the trend has been towards instruments that are easier to use. Since clinical laboratory staff are less trained today, manufacturers design their products to be easier to use to gain competitive advantage, and they advertise this feature. Regulators such as the FDA also recognize the value of ease of use and require hazard analysis.

Ease of use is thus a key product attribute for many systems and fulfills Ralph Evans’s suggestion. But there will nevertheless always be customer misuse issues, and each one must be considered. Some will be shown to be the responsibility of the manufacturer, some the responsibility of the clinical laboratory, and for some, agreement on responsibility will never be reached.

References

  1. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective. Washington DC: AACC Press, 2002, pp 1-3.
  2. See http://www.westgard.com/essay64.htm.
  3. Marx D. Patient Safety and the “Just Culture”: A Primer for Health Care Executives. http://www.mers-tm.net/support/Marx_Primer.pdf

Proficiency testing and six sigma metrics as a measure of analytical quality in laboratory testing

October 14, 2006

Proficiency testing has long been used to assess the analytical quality of laboratory testing. Hospital laboratories submit several proficiency testing results a year, which are used to assess an assay’s analytical performance.

Much has been said about using six sigma calculations as a measure of quality. A recent paper by Westgard and Westgard discusses the analytical quality of laboratory testing based on proficiency testing (1). Here are some problems with the use of proficiency testing in general and with this use of six sigma.
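
For reference, the sigma metric commonly used in this literature combines the allowable total error, bias, and imprecision of an assay. A minimal sketch (the input numbers are illustrative, not taken from reference 1):

    def sigma_metric(tea_pct, bias_pct, cv_pct):
        """Sigma metric as commonly defined: (TEa - |bias|) / CV, all in percent."""
        return (tea_pct - abs(bias_pct)) / cv_pct

    # Illustrative only: 10% allowable total error, 2% bias, 2% CV
    print(sigma_metric(10.0, 2.0, 2.0))  # 4.0, i.e., a "four sigma" process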

Proficiency testing data processing rules often throw out outliers – That is, computer program rules are set up to automatically delete data (see the outliers in proficiency testing essay). The rationale often used is that there must have been a sample mix-up (or something similar that’s not part of the analytical process) and it’s OK to delete these values, because a sample mix-up is a pre-analytical error and proficiency testing is supposed to inform about analytical quality. But what if the outlier result came from the analytical process? How does one really know? With the data coming from thousands of hospital laboratories, there is no practical way to find out.

Proficiency testing misses problems from interferences as well as shorter-term random errors – As part of a method evaluation recommendation, I looked at a year’s worth of performance complaints that appeared in Clinical Chemistry (2). Most complaints (71%) were about interfering substances, a type of analytical error that would likely be missed in proficiency testing. In fact, any shorter-term error source would likely be missed in a proficiency testing program (see the equivalent QC essay). So what is being measured is not the analytical process but a subset of it. How serious is this? It can be very serious (3). To measure potential problems from interfering substances, one must conduct a method comparison using patient samples.

Westgard’s six sigma calculations are based on a model – This model (1) assumes that the data are normally distributed (see the using the wrong model essay). But what if they aren’t? I’ve looked at thousands of datasets: some are normally distributed, some aren’t. This is important because the actual distribution could contain a lot of data in one or both tails (an example is the lognormal distribution). That would mean more defects and a lower six sigma result than for an equivalent normal distribution.
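
A quick numerical illustration of this point, using invented assay values: match a lognormal distribution to a normal distribution with the same mean and SD, and compare the fraction of results beyond a fixed limit.

    from math import log, sqrt
    from statistics import NormalDist

    mean, sd, limit = 100.0, 5.0, 110.0   # illustrative values (limit = 2 SD above mean)

    # Normal model: fraction of results above the limit
    p_normal = 1.0 - NormalDist(mean, sd).cdf(limit)

    # Lognormal with the same mean and SD (method-of-moments match)
    sigma2 = log(1.0 + (sd / mean) ** 2)
    mu = log(mean) - sigma2 / 2.0
    p_lognormal = 1.0 - NormalDist(mu, sqrt(sigma2)).cdf(log(limit))

    print(f"normal tail:    {p_normal:.4%}")     # about 2.28%
    print(f"lognormal tail: {p_lognormal:.4%}")  # about 2.67%: more defects, lower sigma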

The error goals are not severity based – The error goals used in reference 1 are CLIA based. As an example, the CLIA glucose goal is 6 mg/dL or 10%, whichever is higher. This ignores the concept of different severities for different errors, as expressed in a Clarke (or Parkes) error grid. So the whole FMEA concept of ranking errors by severity is lost, since all errors are treated the same.
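
The CLIA glucose criterion can be written as a single pass/fail limit, which makes the loss of severity information easy to see. A sketch (the example errors are invented):

    def clia_glucose_limit(conc_mg_dl):
        """CLIA glucose total-error limit: 6 mg/dL or 10%, whichever is higher."""
        return max(6.0, 0.10 * conc_mg_dl)

    def clia_fail(target, measured):
        # Pass/fail only: every failure counts the same, regardless of how
        # dangerous the error would be on a Clarke or Parkes error grid
        return abs(measured - target) > clia_glucose_limit(target)

    print(clia_fail(60.0, 67.0))    # True: 7 mg/dL error at a hypoglycemic level
    print(clia_fail(400.0, 445.0))  # True: 45 mg/dL error at a high level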

Some recommendations (some of these will also appear as references 4 and 5):

Outliers – If there is no practical way to investigate potential outliers, then report the data two ways: one with all data, the other without the data declared to be outliers.
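
A sketch of this recommendation, assuming the proficiency testing program supplies a target value and peer-group SD (all numbers and the deletion rule are invented for illustration):

    from statistics import mean, stdev

    results = [98, 101, 99, 102, 100, 97, 143]  # illustrative PT results
    target, group_sd = 100.0, 2.0               # assumed peer-group statistics

    # Illustrative deletion rule: drop anything more than 3 peer-group SDs from target
    kept = [x for x in results if abs(x - target) <= 3 * group_sd]

    for label, data in [("all data", results), ("outliers deleted", kept)]:
        print(f"{label}: n={len(data)}, mean={mean(data):.1f}, SD={stdev(data):.1f}")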

Goals – Use error grids to divide errors into severity categories. This means that error grids need to be developed for analytes other than glucose. The concentration of the proficiency sample also has to be chosen optimally with respect to the error grid.

Six sigma estimates – Don’t use models; simply count the data in each zone of the relevant error grid. For example, in reference 1 there are 9,258 laboratories that reported data for cholesterol. If each lab submitted three specimens per year, there are 27,774 data points. That’s plenty to get accurate estimates by counting.
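
A sketch of the counting approach (the defect count and zone boundary are invented placeholders, not real error grid zones or data from reference 1):

    from statistics import NormalDist

    # Illustrative counts only: 27,774 results, of which 25 fall beyond a
    # severe-error zone boundary of the relevant error grid. Counting itself
    # is trivial: with pairs of (target, measured) values, it is just
    # defects = sum(abs(m - t)/t > zone_boundary for t, m in pairs)
    n, defects = 27774, 25

    # Convert the counted defect fraction to a sigma value using the
    # conventional 1.5-sigma long-term shift of six sigma reporting
    p = defects / n
    sigma = NormalDist().inv_cdf(1.0 - p) + 1.5
    print(f"defect rate {p:.5f} -> sigma {sigma:.2f}")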

Estimating a subset of analytical performance – There’s no solution to this problem in proficiency testing. One should not imply that all of analytical performance is being measured by proficiency testing.

How much quality control is required – See more QC essay. All the problems in proficiency testing remain in quality control testing. Thus, one should not imply that all of analytical performance is being measured by quality control.

References

  1. Westgard JO, Westgard SA. The quality of laboratory testing today. Am J Clin Pathol 2006;125:343-354.
  2. Krouwer JS. Estimating total analytical error and its sources: techniques to improve method evaluation. Arch Pathol Lab Med 1992;116:726-731.
  3. Cole LA, Rinne KM, Shahabi S, Omrani A. False positive hCG assay results leading to unnecessary surgery and chemotherapy and needless occurrences of diabetes and coma. Clin Chem 1999;45:313-314.
  4. Krouwer JS. Uncertainty intervals based on deleting data are not useful. Clin Chem 2006;52:1204-1205.
  5. Krouwer JS. Recommendation to treat continuous variable errors like attribute errors. Clin Chem Lab Med 2006;44(7):797-798.

Risk Management II – Beware of “That can be solved by risk management”

October 1, 2006

The value of risk management is becoming more widely recognized in laboratory medicine, since it can deal with all clinical laboratory errors, including:

  • pre-analytical errors
  • analytical errors
  • post-analytical errors

Unfortunately, this has led to some misconceptions about risk management. The purpose of this essay is to further explain the use of risk management (see previous essay).

A brief review

Error events are classified in risk management using two quantities (a classification sketch follows the list):

  • the severity of the possible adverse consequence
  • the probability of occurrence of each event
    • probability is used for potential error events
    • frequency of occurrence is used for observed error events
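
A sketch of how such a classification might be coded. The 1-5 scales and the acceptability thresholds below are invented placeholders; ISO 14971 does not prescribe them.

    def classify_risk(severity, probability):
        """severity and probability on illustrative 1 (lowest) to 5 (highest) scales."""
        score = severity * probability
        if score >= 15:
            return "unacceptable: mitigation required"
        if score >= 8:
            return "review: mitigation recommended"
        return "acceptable"

    # A high-severity but infrequent event still lands in the review band
    print(classify_risk(severity=5, probability=2))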

Two types of risk management: quantitative vs. qualitative

Probability (or frequency) of occurrence can be assessed qualitatively (e.g., on a 1-5 scale) or quantitatively. Expert judgment is often used for qualitative assessment, while modeling and counting are used for quantitative assessment. Unfortunately, one can usually infer that most people who advocate the use of risk management in laboratory medicine mean qualitative risk management.

The problem with qualitative risk management

To understand the problem with qualitative risk assessment, consider assay precision, which has a long tradition of quantitative assessment. In risk management terms:

Quantitative assessment means a traditional precision experiment to quantify the SD and CV of an assay, so that one can quantify the probability of assay values that exceed a desired limit.

Qualitative assessment means that, using judgment (typically no precision experiment would be carried out), one would classify an assay value exceeding a desired limit as very likely, somewhat likely, or not likely (or other qualitative categories).
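
In code, the quantitative version of the precision question looks like this (the replicate values and limit are illustrative, and normality is assumed for the exceedance probability):

    from statistics import NormalDist, mean, stdev

    replicates = [99.2, 101.1, 100.4, 98.7, 100.9, 99.6, 101.5, 100.2]  # illustrative
    m, sd = mean(replicates), stdev(replicates)
    cv_pct = 100.0 * sd / m

    # Probability that a single result exceeds the desired limit (normal model)
    limit = 103.0
    p_exceed = 1.0 - NormalDist(m, sd).cdf(limit)

    print(f"SD = {sd:.2f}, CV = {cv_pct:.2f}%, P(result > {limit}) = {p_exceed:.3%}")

The qualitative version, by contrast, simply asserts “not likely” with no data behind it.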

No one would tolerate a qualitative assessment of precision, but this is how risk management is often positioned for many other assay attributes, particularly those that are difficult to quantify, such as large infrequent errors (e.g., outliers). Moreover, the distinction between qualitative and quantitative risk management is not made; one simply hears: “that problem can be handled by risk management.” This works most likely because of the unfamiliarity with risk management in the clinical laboratory. For example, quantitative risk management terms such as “minimal cut sets” never appear in risk management proposals, and the term does not appear in the ISO standard (14971) on risk management. Yet quantitative risk management does exist, and quantification often involves time-consuming, expensive experiments, often with huge sample sizes. The qualitative alternatives are quick to perform but lack the confidence available from actual data.
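
To make “minimal cut sets” concrete, here is a minimal sketch of the rare-event approximation for a fault tree’s top-event probability. All of the basic events, probabilities, and cut sets below are invented for illustration.

    from math import prod

    # Hypothetical per-test probabilities of basic events
    p = {"wrong_sample": 1e-4, "cal_expired": 5e-4,
         "qc_skipped": 1e-2, "reagent_degraded": 2e-3}

    # Hypothetical minimal cut sets: each is a smallest set of basic events
    # jointly sufficient to cause the top event "incorrect result reported"
    cut_sets = [("wrong_sample",),
                ("cal_expired", "qc_skipped"),
                ("reagent_degraded", "qc_skipped")]

    # Rare-event approximation: P(top) ~ sum over cut sets of the product of
    # their basic-event probabilities (assumes independence and small p's)
    p_top = sum(prod(p[e] for e in cs) for cs in cut_sets)
    print(f"approximate top-event probability per test: {p_top:.2e}")  # 1.25e-04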

So be on the lookout for people who advocate “risk management” as an alternative to quantifying things.