A pedigree of approaches to reduce error

June 15, 2007

James T. Reason is a psychologist who has worked in the field of human error for many years (1-2). My background comes from studying and using defense industry reliability tools such as fault trees, FMEA, FRACAS, and reliability growth management. Reason has influenced others, as can be seen in articles and standards (3-6) that incorporate ideas from cognitive psychology.

Some comments on the Reason paper

Reason uses the “Swiss cheese” error model (this has made it into the CLSI standard GP32). Here, holes (e.g., errors) in slices of Swiss cheese need to line up before an actual error event is observed. Yet, this can also be represented by a fault tree, where the errors are events connected by an AND gate. The probability of an actual error event (the parent of the AND-ed events) can be calculated by combining the probabilities of the individual (child) error events. While perhaps less colorful than the Swiss cheese model, the fault tree is more amenable to actually estimating error rates.
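To make the fault tree view concrete, here is a minimal sketch (in Python, with hypothetical probabilities) of the AND gate calculation. It assumes the child error events are independent; with real data the probabilities would come from observed error rates.

    # Fault tree sketch: the parent (observed) error event occurs only when all of
    # its child error events occur together (an AND gate).
    # Probabilities are hypothetical and the child events are assumed independent.
    from math import prod

    def and_gate(child_probabilities):
        """Probability of the parent event when all children must occur."""
        return prod(child_probabilities)

    # Hypothetical "holes in the cheese": three barriers that each fail occasionally.
    children = [0.01, 0.05, 0.02]   # per-opportunity probability of each child error
    print(and_gate(children))       # ~1e-05: the parent event is much rarer than any child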

Reason also uses the terms active and latent errors. If one reads the Reason article, one gets the point. However, this concept is abstracted by other authors in a confusing way. For example, latent errors are defined in GP32 as “less apparent failures of organization or design that contributed to the occurrence of errors or allowed them to cause harm to patients.” Since one would logically fix apparent errors, this definition seems to make virtually all errors “latent.” Moreover, in references 3 and 6, the authors use latent as an error classification. In FRACAS, there are two important ways to classify observed errors: by severity and by frequency of occurrence. Other classifications are secondary, albeit often useful, but classifying errors as latent is simply too confusing.

The terms cognitive error and non-cognitive error are quite useful, even though they do not apply to non-human errors. Non-cognitive errors are usually considered non-preventable, which implies detection-recovery schemes to prevent the effects of such errors from occurring. Cognitive errors are usually considered preventable, which implies control measures such as better training.

Some comments on the Astion paper

The work by Astion et al. on laboratory medical errors (3) contains a wealth of information, yet some aspects of their paper prompt me to comment.

One gets the impression that a fundamental way to prevent adverse events has been missed in the paper. This is inferred from the definition of adverse and potential adverse events.

“A potential adverse event was defined as an error or incident that produced no injury but had the clear potential to do so. Such errors may have been intercepted before producing harm or may have reached the patient but, by good fortune, produced no injury.”

The second sentence contains the key words: errors may have been intercepted (by good fortune) before producing harm. This seems to ignore the hierarchical relationship of events as expressed in a fault tree. That is, higher-level events are effects and lower-level events are causes. For many lower-level events (causes) that occur (e.g., errors), there is a process step designed to detect the error and recover from it, thereby preventing the higher-level adverse event. (See the transplant example below.)

One can also question the usefulness of the classification of adverse events vs. potential adverse events. The clinical laboratory is removed from the patient. In almost all cases, patient harm related to laboratory errors starts out as a potential adverse event; whether it becomes an actual adverse event often depends on circumstances outside of the control of the clinical laboratory.

Preventability is defined as

“A preventable problem was considered an error that was reasonably avoidable, in which the error was a mistake in performance or thought. Preventability was scored on a scale of 1, definitely not preventable, to 5, definitely preventable. A score of 3 or more indicated a preventable incident.”

It would seem that preventability is synonymous with whether the error is cognitive or non-cognitive. This also neglects the fault tree model. It is not clear from the Astion et al. definition which events are considered for preventability. As an example, consider a chain of two events:

  1. the wrong blood type organ is selected for transplantation
  2. the wrong blood type organ is transplanted

Error 1, as a cause, could be considered non-preventable as a “slip” or non-cognitive error. However, the effect of error 1, which is event 2, can be prevented by instituting checks to detect error 1 and recover from it. Explicitly calling out the chain of events adds clarity, especially since detection-recovery sequences are different from preventing errors: detection-recovery sequences prevent the effects of errors.
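As a rough illustration of this point, the sketch below (Python, with hypothetical numbers) separates the rate of the cause (error 1) from the rate of its effect (event 2) when a detection-recovery check sits between them:

    # Cause-effect chain with a detection-recovery step in between.
    # Numbers are hypothetical; the point is that event 2 can be made rare
    # even though error 1 (a non-preventable "slip") still occurs.
    p_error1 = 1e-3          # wrong blood type organ is selected
    p_check_catches = 0.99   # verification step detects error 1 and recovers

    p_event2 = p_error1 * (1 - p_check_catches)   # wrong blood type organ is transplanted
    print(p_event2)          # ~1e-05: the effect is prevented without preventing the cause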

Another issue is that preventability is viewed in terms of preventing the occurrence of events without explicit reference to the control measure that would be used. Yet, the choice of the mitigation or control measure is key, as it determines the expected effectiveness and cost. As an example, assume that a medication error is caused by a pharmacist misreading difficult-to-read handwriting from a physician. One can envision at least two possible control measures:

  1. a process step to call the physician if the pharmacist is in doubt
  2. institute a CPOE (computer physician order entry) system

Control measure 1 could be questioned for its effectiveness, but it is low cost. Control measure 2 is highly effective but high cost. Control measure 2, like any measure that imposes a cost burden, depends on the financial status of the institution. Thus, any preventability ranking needs to take into account the specific control measure intended. The authors’ Table 7 (preventability) lists error causes, but the likelihood of preventability should refer to control measures, none of which are listed.
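To make the point explicit, a preventability ranking could carry the intended control measure along with its expected effectiveness and cost. The sketch below (Python) shows the kind of structure that would be needed; the effectiveness and cost numbers are hypothetical and nothing like this appears in Table 7:

    # Hypothetical structure tying a preventability judgment to specific control measures.
    # Effectiveness and cost values are invented for illustration only.
    control_measures = [
        {"name": "call physician when handwriting is in doubt",
         "expected_effectiveness": 0.6,    # fraction of these errors expected to be prevented
         "annual_cost_usd": 5_000},
        {"name": "CPOE (computer physician order entry) system",
         "expected_effectiveness": 0.95,
         "annual_cost_usd": 500_000},
    ]

    # A preventability score would then refer to a specific entry in this list.
    for cm in control_measures:
        print(cm["name"], cm["expected_effectiveness"], cm["annual_cost_usd"])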

This work includes the lion’s share of the effort of a FRACAS; that is, actually collecting all of the error events is often the most difficult step. Using the data in a more traditional FRACAS would have been an improvement. Thus, Table 2 is the severity and frequency of occurrence table, but it is not labeled as such, nor is there a severity ranking. There is also no criticality (severity x frequency of occurrence) and no Pareto chart (a simple criticality calculation is sketched after the list below). A problem with the other tables is that they should not appear as independent tables. Take cognitive and non-cognitive errors: one does not want to know only the overall split between these two items, one wants to know this split for each of the severity categories in Table 2. That caveat aside, the classifications:

  • phase of laboratory testing
  • cognitive vs. non-cognitive error
  • responsibility for the incident

are all valuable to help focus corrective action resources.
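For context, here is a minimal sketch (Python, with hypothetical error categories, severity weights, and counts) of the criticality calculation and Pareto ordering that a traditional FRACAS would produce from data like that in Table 2:

    # FRACAS-style criticality: severity x frequency of occurrence, then Pareto ordering.
    # Categories, severity weights, and counts are hypothetical.
    incidents = {
        # category: (severity weight 1-5, number of occurrences)
        "specimen mislabeled":            (5, 12),
        "result reported to wrong chart": (4, 7),
        "delayed stat result":            (2, 40),
        "illegible requisition":          (1, 55),
    }

    criticality = {cat: sev * freq for cat, (sev, freq) in incidents.items()}

    # Pareto: direct corrective action resources at the highest-criticality categories first.
    for cat, crit in sorted(criticality.items(), key=lambda kv: kv[1], reverse=True):
        print(f"{cat}: {crit}")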

The bottom line

Here’s what needs to be done. Consider for a moment what a business does. A business collects and analyzes a lot of data, but it has one (and only one) key number that it reports: profitability. All data analysis is used by the business to inform decisions about how to improve profitability.

There’s nothing like that in these references. What one needs is an overall rate of patient safety errors. This could be one rate if one can combine (by a weighting scheme) high-, moderate-, and low-risk errors. If not, then one would have three rates: those of high-risk, moderate-risk, and low-risk errors. Everything else (all analyses, corrective actions, and so on) should be geared toward reducing the error rates to acceptable levels. This last statement means that the clinical laboratory requires goals for each error rate. There is no mention of goals in these references.
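A minimal sketch of what such a single number could look like (Python; the weights, counts, test volume, and goal are all hypothetical, and the weighting scheme itself would have to be agreed upon):

    # Hypothetical weighted overall patient safety error rate, compared against a goal.
    counts  = {"high": 3, "moderate": 25, "low": 140}    # errors observed in the period
    weights = {"high": 100, "moderate": 10, "low": 1}    # relative risk weights
    total_tests = 250_000                                # opportunities in the same period

    weighted_errors = sum(weights[k] * counts[k] for k in counts)
    overall_rate = weighted_errors / total_tests         # weighted errors per test

    goal = 0.002   # hypothetical acceptable weighted rate
    print(overall_rate, "meets goal" if overall_rate <= goal else "exceeds goal")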

Non-patient-safety errors are possible as well. These could also be ranked for severity and have their own importance; thus, errors that affect accreditation or cost are also important.

References

  1. Reason JT. Human Error. New York, NY: Cambridge University Press; 1990.
  2. Reason J. Human error: models and management. BMJ 2000;320:768-770. Available at http://www.bmj.com/cgi/content/full/320/7237/768
  3. Astion ML, Shojania KG, Hamill TR, Kim S, Ng VL. Classifying laboratory incident reports to identify problems that jeopardize patient safety. Am J Clin Pathol 2003;120(1):18-26. Available at http://www.medscape.com/viewarticle/458299
  4. CLSI GP32. Management of Nonconforming Laboratory Events; Proposed Guideline.
  5. ISO TC212 22367 Technical Report. Medical laboratories — Reduction of error through risk management and continual improvement.
  6. Carraro P, Plebani M. Errors in a Stat Laboratory: Changes in Type and Frequency since 1996. 2007;53: . This is coming out in June.
  7. Krouwer JS. Using a learning curve approach to reduce laboratory error. Accred Qual Assur 2002;7:461-467.