FMEA and Validation – 2/2006

February 13, 2006

In conducting a FMEA, one goes through the steps of

  • Modeling the process (often with help of process flowcharts)
  • Postulating all potential errors
  • Classifying all potential errors
  • Ranking the classified errors
  • Proposing mitigations for the top errors
  • Performing a FMEA on the new process (i.e., the process as changed by the mitigations)

If any of these steps is likely to be neglected, it is the last one – that of performing yet another FMEA! (This sounds recursive, since a subsequent FMEA can cause more changes.) The purpose of this essay is to consider validation of a FMEA, which can be thought of as part of the task of performing a FMEA on the new process (i.e., as changed by mitigations).

An Example

Recall a model used in FMEA, namely the error, detection, recovery model (see figure), in which one is trying to prevent the effect of an error, given that an error has occurred (see also the near miss essay).

For the example, consider the process steps when a sample arrives for analysis at a hospital laboratory (1). One of the steps is to examine the sample visually for lipemia and, if this condition is observed, to perform a “recovery”, often by notifying the source that sent the sample and/or by further processing the sample. Assume that the original error occurred outside of the laboratory responsible for analyzing the sample. This is a common situation, although the hospital laboratory that analyzes the sample may also be responsible for preparing it.

To put some numbers on this example, assume that the hospital laboratory receives 100,000 samples per year and that 1% of these samples fail the criteria for lipemia. This means that 1,000 samples are lipemic. Now one may reason that all lipemic samples will be detected and a recovery performed, because detection and recovery steps are in place. However, consider what would happen if these steps did not always work. Assume that the detection step was 95% effective and the recovery step was 99% effective. This means that of the 1,000 samples that are lipemic, 50 will not be detected and will be analyzed in error. Of the 950 samples that are detected, 9.5 (on average) will fail recovery, so the total number of samples subject to the error effect is 50 + 9.5 = 59.5 per 100,000, or 0.0595%.

To summarize:

  • the error event frequency is 1% (0.01 × 100,000 = 1,000 samples per year), with the error event being that a lipemic sample arrives for analysis
  • the error event effect frequency is 59.5/100,000 = 0.0595%, with the error event effect being that a lipemic sample is analyzed

Assume also that the fraction of samples for which lipemia would cause a result error is 2%. This means that for the original 100,000 samples, the higher level observed error effect of a wrong answer has the combined probability (59.5/100,000) × 2% × 100,000 ≈ 1.2 samples, on average, every year. This error could in turn result in anything in the spectrum from no patient harm to a patient death, but the point of this essay is to go back to the FMEA steps that have been put in place to detect and recover from the original error (rather than to focus on outcomes).
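
The arithmetic above can be checked with a short script (a minimal sketch; the rates are the example's assumptions and the variable names are mine):

    # Assumed rates from the example above.
    samples_per_year = 100_000
    lipemia_rate = 0.01            # 1% of arriving samples are lipemic
    detection_success = 0.95       # assumed effectiveness of visual detection
    recovery_success = 0.99       # assumed effectiveness of recovery
    result_error_rate = 0.02       # fraction for which lipemia causes a result error

    lipemic = samples_per_year * lipemia_rate                                # 1,000
    undetected = lipemic * (1 - detection_success)                           # 50 analyzed in error
    failed_recovery = lipemic * detection_success * (1 - recovery_success)   # 9.5 on average
    error_effect = undetected + failed_recovery                              # 59.5 per 100,000

    wrong_results = error_effect * result_error_rate                         # ~1.2 per year
    print(f"{error_effect} error effects ({error_effect / samples_per_year:.4%}), "
          f"{wrong_results:.1f} wrong results per year")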

Validation

In this example, I arbitrarily set detection success at 95% and recovery success at 99%. The laboratory person responsible for quality might argue that both steps are failsafe and hence virtually 100% effective; if there is a valid criterion for lipemia, it might be hard to imagine how one could miss detecting it or fail to initiate a recovery. Nevertheless, validation provides objective evidence of whether detection and recovery meet their goals. To set up a validation experiment for detection, one might have an independent observer rate all samples for lipemia, in a way that does not interfere with the routine process in place for examining the sample, and then tally the results as:

Independent Observer   Routine Observer – Lipemic   Routine Observer – Not Lipemic
Lipemic                Match                        Error
Not Lipemic            Error                        Match
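
For concreteness, a minimal sketch of such a tally in Python (the pairing scheme and names are mine; the data shown are hypothetical):

    from collections import Counter

    # Each pair is (independent observer's call, routine observer's call).
    pairs = [("lipemic", "lipemic"), ("not lipemic", "not lipemic")]  # hypothetical data

    counts = Counter(pairs)
    missed = counts[("lipemic", "not lipemic")]        # lipemic, but missed by routine process
    false_flag = counts[("not lipemic", "lipemic")]    # flagged, but not lipemic
    matches = len(pairs) - missed - false_flag
    print(f"matches: {matches}, missed lipemic: {missed}, false flags: {false_flag}")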

In this experiment, one is assuming that the independent observer is correct. An additional part of the validation experiment is the sample size. That is, say the independent observer has checked 100 consecutive samples and found no mismatches. The table might look like:

Independent Observer   Routine Observer – Lipemic   Routine Observer – Not Lipemic
Lipemic                1                            0
Not Lipemic            0                            99

The observed error rate for each of the two possible error types is zero, but the 95% confidence limits (2) for the two mismatch error rates are:

Independent Observer   Routine Observer – Lipemic   Routine Observer – Not Lipemic
Lipemic                –                            95%
Not Lipemic            2.98%                        –

The problem is that there has been only 1 opportunity to misclassify a lipemic sample, so the confidence interval actually says that this error rate could be as high as 95%! Say one goes back and rigs the experiment to include 10% lipemic samples, runs the experiment for 500 samples, and gets the following results:

Independent Observer   Routine Observer – Lipemic   Routine Observer – Not Lipemic
Lipemic                50                           0
Not Lipemic            0                            450

The observed error rate for each error type is again zero, but the 95% confidence limits for the two mismatch error rates are now:

Independent Observer   Routine Observer – Lipemic   Routine Observer – Not Lipemic
Lipemic                –                            5.8%
Not Lipemic            0.66%                        –

So even with all of this work, one has only “proved” (i.e., with 95% confidence) that the success rate for detecting lipemic samples is about 94% or better. Of course, it is also possible that the mismatch rates will be nonzero. The same arguments apply to recovery.
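
These confidence limits can be reproduced with a few lines of code (a sketch assuming the one-sided exact binomial bound for zero observed failures, which solves (1 − p)^n = α; see reference 2 for the general treatment):

    def upper_bound_zero_errors(n, alpha=0.05):
        """Upper 95% one-sided bound on an error rate when 0 errors are
        observed in n opportunities: solves (1 - p)**n = alpha for p."""
        return 1 - alpha ** (1 / n)

    for n in (1, 99, 50, 450):
        print(f"n = {n:3d}: error rate could be as high as {upper_bound_zero_errors(n):.2%}")
    # n =   1: 95.00%;  n =  99: 2.98%;  n =  50: 5.82%;  n = 450: 0.66%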

Errors and Outcomes

The error rate caused by failed detection and recovery was calculated from my assumed rates to be 59.5 samples per year, but this error rate leads to the outcome of a wrong result for only about 1 sample per year. This may lull the hospital laboratory into a false sense of security: the current process may be flawed and yet not lead to customer complaints. Hence, one should exclude outcomes from the analysis, since the hospital laboratory can control the outcome rate only through its detection and recovery rates.

Making up examples is difficult, but there are real problems

Validation should lead to a case where no errors are found, which may make one exclaim that they have been forced to do something for which they already knew the outcome. However, consider the following real cases:

  • Detection – Detection was missed when organs of the wrong blood type were selected to be transplanted; the transplant occurred and the patient died (3).
  • Detection – Airline pilots repeat air traffic controller orders to detect miscommunication. Yet miscommunication detection failed and caused one of the largest air disasters ever (4).
  • Recovery – It was detected that the wrong leg was scheduled to be amputated, but the recovery (changing the operating room schedules) failed: not all operating room schedules were changed (5) and the wrong leg was amputated.

Hence, even though it might be hard to envision how things can go wrong, there are real cases where seemingly simple detection and recovery process steps have failed. Validation is suggested as a means to help ensure that new or existing mitigations work – and should be considered as a tool to help with performing a FMEA on mitigations.

The quality of validation – Equivalent QC

CMS has proposed equivalent QC for clinical laboratories. In changing the QC process, CMS requires validation (of the use of equivalent QC). I have commented on the inadequacy of this validation (see the equivalent QC essay). This leaves the question of what constitutes an adequate validation. In some cases, people conducting a FMEA might assume perfect detection and recovery. Some level of validation beyond this assumption is warranted, but must one conduct experiments that contain thousands of samples to prove that rare events haven’t happened? This topic will be pursued in a future essay; a rough sense of the sample sizes involved is sketched below.
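
As a rough illustration of the sample sizes involved (a sketch using the same zero-failure binomial bound as above; the target error rates are mine, chosen for illustration):

    import math

    def n_required(p, alpha=0.05):
        """Smallest number of consecutive error-free observations needed to
        claim, with 95% confidence, that the error rate is below p."""
        return math.ceil(math.log(alpha) / math.log(1 - p))

    for p in (0.05, 0.01, 0.001):
        print(f"to show error rate < {p:.1%}: {n_required(p)} error-free samples")
    # 5.0% -> 59;  1.0% -> 299;  0.1% -> 2,995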

References

  1. NCCLS. Application of a Quality System Model for Laboratory Services; Approved Guideline – Second Edition. GP26-A3. Wayne, PA: NCCLS; 2004.
  2. Hahn GJ, Meeker WQ. Statistical Intervals: A Guide for Practitioners. New York: Wiley; 1991, p. 104.
  3. See http://www.cbsnews.com/stories/2003/02/18/health/main540907.shtml
  4. Cushing S. Fatal Words: Communication Clashes and Aircraft Crashes. Chicago, IL: University of Chicago Press; 1997.
  5. Scott D. Preventing medical mistakes. RN. 2000 Aug;63(8):60-4.