10/21/2007 – Near Miss

 

William Marella writes about near misses in Patient Safety and Healthcare. Much of what he says makes sense, but overall, the article itself is a near miss. Here’s why.

Mr. Marella reports that most hospitals follow regulators’ recommendations to report only adverse events and not near misses. To understand the problem with this (beyond what Mr. Marella discusses), let’s look at FRACAS (Failure Reporting And Corrective Action System). With FRACAS, the steps are as follows:

1. Observe and report on all errors.
2. Classify each error as to its severity and frequency of occurrence.
3. Construct a Pareto chart.
4. Implement corrective actions for the items at the top of the Pareto chart.
5. Measure progress as an overall (e.g., combined) error rate.
6. Continue steps 1-5 until the error rate goal is met.
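
To make these steps concrete, here is a minimal sketch in Python. The error categories, counts, and opportunity denominator are invented for illustration; the article supplies none of them.

```python
from collections import Counter

# Step 1: every observed error is logged, including near misses,
# not just adverse outcomes. (Hypothetical reports.)
reports = [
    {"category": "specimen mislabeled", "severity": "high"},
    {"category": "specimen mislabeled", "severity": "high"},
    {"category": "hemolyzed sample analyzed", "severity": "medium"},
    {"category": "delayed result", "severity": "low"},
    {"category": "specimen mislabeled", "severity": "high"},
]
opportunities = 10_000  # total opportunities for error in the same period

# Step 2: classify by category; severity is recorded with each report.
counts = Counter(r["category"] for r in reports)

# Step 3: Pareto order -- most frequent categories first.
for category, n in counts.most_common():
    print(f"{category}: {n}")

# Step 4 would target the top of this list with corrective actions.

# Step 5: one overall (combined) error rate across all categories.
overall_rate = len(reports) / opportunities
print(f"overall error rate: {overall_rate:.2%}")

# Step 6: repeat the cycle until overall_rate meets the goal.
```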

So an immediate problem with what’s being done is that step #3, constructing a Pareto chart, is being handed down from regulators, and one can question the origin of that Pareto. Moreover, as Mr. Marella correctly points out, this Pareto chart is about adverse outcomes, not events in the process. To understand why this is a problem, consider the following chart about errors:

[Chart: error event A escalates to error event B, and B to C, whenever the error is undetected, or detected but the recovery fails]

When an error occurs, there is an opportunity for it to be detected. If detection (and recovery) succeed, a more serious error event has been prevented. So in this chart, error event A leads to error event B when it is either undetected or detected but the recovery fails; by the same logic, error event B leads to error event C, with each later letter carrying a more severe consequence. As a real example, recall the national news story of the Mexican teenage girl who came to the US for a heart-lung transplant. Organs of the wrong blood type were selected (error event A); this error went undetected and the unsuitable organs were transplanted (error event B). The true cause of the patient’s decline was eventually detected, but the recovery failed and the patient died (error event C).
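
The chart’s logic can also be written as a small probability model. This is my own sketch, not from the article: an error at one stage escalates to the next, more severe event whenever it goes undetected, or is detected but the recovery fails.

```python
def p_escalate(p_detect: float, p_recover: float) -> float:
    """Probability that an error event escalates to the next, more severe
    event: it escalates if it is not detected, or if it is detected but
    the subsequent recovery fails."""
    return (1.0 - p_detect) + p_detect * (1.0 - p_recover)

# Reading the transplant example in these terms: event A (wrong blood type
# selected) went undetected, so it escalated to B (organs transplanted);
# B was detected but the recovery failed, so it escalated to C (death).
print(p_escalate(p_detect=0.0, p_recover=0.0))  # A -> B: 1.0 (no detection)
print(p_escalate(p_detect=1.0, p_recover=0.0))  # B -> C: 1.0 (recovery failed)
print(p_escalate(p_detect=0.9, p_recover=0.8))  # 0.28 when detection and recovery are good
```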

Let’s consider detection in more detail. In planned detection, a detection step is built into the process. In a clinical laboratory, for example, a specimen is examined to see if it is adequate: a serum sample that is red has been hemolyzed and will give an erroneous potassium result, so detection means the sample is not analyzed, at least not for potassium. This causes a “delayed result” error rather than the more serious error of sending an erroneous result to the clinician. Typically, detection steps are optimized so that they are more or less guaranteed to be effective. In some cases, people have gone overboard: in one report, the average number of detection steps to assess whether the surgery site is correct was 12, which is too many.
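
A planned detection step like the hemolysis check can be written as an explicit gate in the process. The function and field names below are invented for illustration:

```python
def screen_serum_sample(sample: dict) -> list[str]:
    """Planned detection step: decide which ordered tests may be run.
    A visibly red (hemolyzed) serum sample releases potassium from red
    cells, so a potassium result would be erroneous; hold that test and
    accept a 'delayed result' error instead of reporting a wrong number."""
    runnable = list(sample["ordered_tests"])
    if sample.get("hemolyzed") and "potassium" in runnable:
        runnable.remove("potassium")
        print("hemolyzed specimen: potassium held, recollection requested")
    return runnable

print(screen_serum_sample(
    {"ordered_tests": ["sodium", "potassium", "glucose"], "hemolyzed": True}
))  # ['sodium', 'glucose']
```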

However, a salient feature of a near miss is accidental detection. This unplanned detection signals a problem with the process that requires correction. There is of course no guarantee that accidental detection will occur the next time, and it likely won’t, so when accidental detection occurs, the severity assigned is that of the more serious event, as if the detection had not occurred. The corrective action may be to create a planned detection step or to make other changes to the process. This also points out the problem with regulators constructing their own Pareto: by not collecting all errors and then classifying them, high-severity errors (near misses) are neglected. So basically, steps #1 and #2 in a FRACAS have been omitted.
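
One way to encode that classification rule, using a severity scale of my own invention: when the detection that stopped the error was accidental rather than planned, the report is scored with the severity of the event it narrowly prevented.

```python
# Hypothetical severity scale: higher is worse.
SEVERITY = {
    "delayed result": 1,
    "erroneous result reported": 3,
    "patient death or serious injury": 5,
}

def reported_severity(actual_event: str, prevented_event: str, detection: str) -> int:
    """Score an error report. An accidental (unplanned) detection is a
    near miss: score it as if the detection had not happened, i.e. with
    the severity of the more serious event it prevented."""
    if detection == "accidental":
        return SEVERITY[prevented_event]
    return SEVERITY[actual_event]

# A hemolyzed sample caught only by chance: scored as if the erroneous
# potassium result had reached the clinician.
print(reported_severity("delayed result", "erroneous result reported", "accidental"))  # 3
```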

Another problem is the failure to construct an overall metric and measure it.

Some things to know about error rates

  1. One should track only one (or in some cases a few) error rates.
  2. The (overall) error rate goal should not be zero.
  3. Resources are limited. One can only implement a limited number of mitigations.

The National Quality Forum (NQF) has identified 28 adverse events to be tracked, the so-called “never events”. There is no way that one can establish allowable rates for each of these events, and a “never event” implies an allowable rate of zero, which is meaningless. For those who have a problem with anything other than a zero error rate, one must understand that one is working with probabilities. For example, say one must have a blood gas result. Assume one knows that a blood gas instrument fails, on average, once every 3 months, and that when it fails, the blood gas system is unavailable for one day. Say this failure rate is too frequent. One can address this by having 2, 3, or as many blood gas instruments as one wants (or can afford), with failure now occurring only when all blood gas instruments fail simultaneously. But no matter how many blood gas instruments one has, the estimated rate of failure is never zero, although it can be made low enough to be acceptable, and perhaps so low that it can be assumed “never” to occur. Still, there is a big difference between the “never” used by the NQF and an estimated probability of failure. In fact, the difference in cost between a calculated rate that is greater than zero and could plausibly occur in one’s lifetime and a calculated rate that translates to “never” can be substantial.

The blood gas example uses redundancy to prevent error. The wrong-site surgery example above uses detection, which is of course much cheaper than buying additional instruments. Each mitigation has its own cost. Computerized physician order entry is an expensive mitigation to prevent medication errors due to illegible handwriting. Financially, all of this reduces to a kind of portfolio analysis: one must select from a basket of mitigations an optimal set that achieves the lowest possible overall error rate at an affordable cost.
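
Putting numbers on the blood gas example above: under the stated assumptions (one failure roughly every 3 months, one day of downtime per failure), plus the additional assumption that instruments fail independently, the fraction of time all instruments are down at once shrinks rapidly with each added instrument but never reaches zero.

```python
# Per-instrument unavailability: roughly 1 day of downtime per ~91-day quarter.
downtime_days = 1.0
days_between_failures = 91.0
u = downtime_days / days_between_failures  # ~0.011

for n in range(1, 5):
    # Assuming independent failures, blood gas results are unavailable only
    # when all n instruments are down at the same time.
    print(f"{n} instrument(s): unavailability ~ {u ** n:.1e}")
# 1 -> ~1.1e-02, 2 -> ~1.2e-04, 3 -> ~1.3e-06, 4 -> ~1.5e-08: ever smaller, never zero.
```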

This (portfolio) analysis only makes sense if one is combining errors. If error A causes patient death or serious injury and error B does the same, and there are many more such events, one can combine these errors to arrive at a single error rate for all error events that cause patient death or serious injury. This is similar to financial analysis, in which there is one “bottom line”, the profitability of the overall business: individual product lines are combined to arrive at one number.
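
Here is a minimal sketch of that portfolio idea, with invented mitigations, costs, and rate reductions: pick the subset of mitigations that yields the lowest combined error rate without exceeding the budget. For a handful of candidates, brute-force enumeration of subsets is sufficient.

```python
from itertools import combinations

# Hypothetical mitigations: (name, annual cost, reduction in combined error rate).
mitigations = [
    ("surgical site checklist",               5_000, 0.0008),
    ("redundant blood gas instrument",       40_000, 0.0002),
    ("computerized physician order entry",  250_000, 0.0015),
    ("barcode specimen labeling",            60_000, 0.0010),
]
baseline_rate = 0.005   # combined rate over all serious error events
budget = 120_000

best_rate, best_set = baseline_rate, ()
for k in range(1, len(mitigations) + 1):
    for subset in combinations(mitigations, k):
        cost = sum(c for _, c, _ in subset)
        rate = baseline_rate - sum(r for _, _, r in subset)
        if cost <= budget and rate < best_rate:
            best_rate, best_set = rate, tuple(name for name, _, _ in subset)

print(f"lowest combined error rate within budget: {best_rate:.4f}")
print("chosen mitigations:", best_set)
```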


