Frequency of Medical Errors II – Where’s the Data?

May 17, 2006

In virtually any tutorial about quality improvement, one is likely to encounter something like Figure 1, which describes a “closed loop” process. The way this works is simple. One has a goal that one wishes to meet and measures data appropriate to that goal. If the results fall short, one enters the “closed loop”: one revises the process, measures progress, and continues this cycle until the goal is met. Then one enters a different phase (not shown in Figure 1), in which one ensures that the goal will continue to be met.
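In code form, the cycle in Figure 1 is simply an iterate-until-the-goal-is-met loop. Here is a minimal Python sketch, where measure, goal_met, and revise are hypothetical stand-ins for the real activities:

    def closed_loop(process, measure, goal_met, revise):
        """Run the Figure 1 cycle: measure the process and, while the
        result falls short of the goal, revise the process and remeasure."""
        result = measure(process)
        while not goal_met(result):
            process = revise(process, result)  # change the process...
            result = measure(process)          # ...and measure progress
        return process  # goal met; next phase (not shown): hold the gain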

Two deficiencies in the patient safety movement are: 1) the lack of clear, quantitative goals; and 2) the lack of data from which one can measure progress. A list of problems with the way goals are often stated is available (1).

An interesting paper that appeared recently discusses wrong site surgery (2). Given the visibility of wrong site surgery, one notable aspect of this paper is that it is one of the few sources that reports wrong site surgery rates. The wrong site surgery rate was 1 in 112,994, or 8.85 wrong site surgeries per million opportunities. Recall that a 6 sigma process has 3.4 errors per million opportunities, so this rate corresponds to about 5.8 sigma. The authors state that the rate is equivalent to an error occurring once every 5 to 10 years. This matches the lowest frequency ranking in the Veterans Administration scheme, an error occurring once or less every 5 to 30 years (3).
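As a quick check on this arithmetic, here is a minimal Python sketch that converts a defects-per-million-opportunities rate into a sigma level, using the conventional 1.5 sigma long-term shift from the six sigma literature:

    from scipy.stats import norm

    def sigma_level(dpmo):
        """Convert defects per million opportunities to a sigma level,
        applying the conventional 1.5 sigma long-term shift."""
        return norm.ppf(1 - dpmo / 1e6) + 1.5

    dpmo = 1e6 / 112994  # the paper's rate: about 8.85 per million
    print(round(sigma_level(dpmo), 1))  # about 5.8
    print(round(sigma_level(3.4), 1))   # 6.0, the classic 6 sigma benchmark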

Another interesting aspect of the paper is the discussion of the Universal Protocol, a series of steps incorporated into the surgical process and designed to prevent wrong site surgery. One of the conclusions of the paper is that the Universal Protocol does not prevent all wrong site surgeries, even though it was implemented as the solution to prevent them. The problem is that although one would hope a single process change is sufficient to remedy an issue, often it is not. Thus, one must continue to collect data and to add remedies and/or change existing ones until the goal has been met; in other words, continue with the cycle shown in Figure 1. So one criticism of the patient safety movement is the mandated, static nature of corrective actions. The dynamic nature implied in Figure 1 seems to have been bypassed.

The authors lament that the public is likely to overreact to wrong site surgery relative to other surgical errors such as retained foreign bodies. There are several points to be made here.

In classifying the severity of an error, one must examine its effect, which means looking at the consequences of the downstream events connected to the error (often facilitated by using a fault tree). Based on the authors’ discussion of actual data, a retained foreign body is a more severe error than wrong site surgery. This is somewhat surprising, but understandable.
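In miniature, that downstream analysis can be sketched as follows, taking an error’s effective severity to be the worst outcome reachable from it; the event graph and severity numbers here are invented for illustration:

    # Hypothetical downstream-event graph: event -> (severity, children).
    # Severity scale: 1 (minor) to 4 (catastrophic); all values invented.
    events = {
        "retained foreign body": (2, ["infection", "reoperation"]),
        "infection":             (3, ["sepsis"]),
        "sepsis":                (4, []),
        "reoperation":           (3, []),
        "wrong site surgery":    (3, []),
    }

    def effective_severity(event):
        """Worst severity reachable from an event via downstream consequences."""
        severity, children = events[event]
        return max([severity] + [effective_severity(c) for c in children])

    print(effective_severity("retained foreign body"))  # 4: sepsis dominates
    print(effective_severity("wrong site surgery"))     # 3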

Once one has classified all error events for criticality (the combination of severity and frequency of occurrence), one has the means to construct a Pareto chart. Since organizations have limited resources and cannot fix all problems, the Pareto chart guides prioritization; retained foreign bodies are likely to rank higher than wrong site surgery and thus deserve more attention.
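As an illustration, here is a minimal sketch of ranking error events by criticality; the severity and frequency values are invented, not taken from the paper:

    # Hypothetical criticality scoring: severity and frequency ranks, 1-4 scales.
    events = {
        "retained foreign body": {"severity": 4, "frequency": 2},
        "wrong site surgery":    {"severity": 3, "frequency": 1},
        "specimen mislabeling":  {"severity": 2, "frequency": 3},
    }

    # Criticality = severity x frequency; sort descending to order the Pareto chart.
    ranked = sorted(events.items(),
                    key=lambda kv: kv[1]["severity"] * kv[1]["frequency"],
                    reverse=True)

    for name, s in ranked:
        print(name, s["severity"] * s["frequency"])
    # retained foreign body 8, specimen mislabeling 6, wrong site surgery 3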

Proposed process changes need to be evaluated with respect to cost and effectiveness. The “portfolio” of proposed process changes can be viewed as a decision analysis problem whereby the “basket” of process changes selected represents the largest cumulative reduction in medical errors (e.g., reduction in the cost associated with medical errors) for the lowest cumulative cost. See the essay on preventability.
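Selecting that basket is essentially a knapsack problem: maximize the cumulative reduction in error-related cost subject to a budget. A minimal sketch, with entirely hypothetical candidate changes and numbers:

    from itertools import combinations

    # (name, implementation cost, expected reduction in error-related cost);
    # all numbers are invented for illustration.
    changes = [("site marking audit", 40, 120),
               ("sponge count system", 60, 200),
               ("extra time-out step", 30, 50)]
    budget = 100

    # Exhaustive search suffices for a small portfolio; a dynamic-programming
    # knapsack would be used for larger ones.
    best = max((subset for r in range(len(changes) + 1)
                for subset in combinations(changes, r)
                if sum(cost for _, cost, _ in subset) <= budget),
               key=lambda subset: sum(benefit for _, _, benefit in subset))

    print([name for name, _, _ in best])
    # ['site marking audit', 'sponge count system']: benefit 320 at cost 100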

I discuss (4) a hypothetical case where two events have identical criticality with respect to patient safety but one is high profile and the other is not. Should the high profile event get more attention? The answer is yes, because besides patient safety there are other error categories for which the high profile event will matter more, such as customer complaints, threat to accreditation, and threat to financial health.

Other comments could be made, but perhaps the most important is that studies such as this one are extremely valuable and are the heart of Figure 1: examining error events and currently implemented corrective actions, and deciding how to make further improvements.

References:

  1. Assay Development and Evaluation: A Manufacturer’s Perspective. Jan S. Krouwer, AACC Press, Washington DC, 2002. pp 33-44.
  2. Kwaan MR, Studdert DM, Zinner MJ, Gawande AA. Incidence, patterns, and prevention of wrong-site surgery. Arch Surg. 2006;141:353-7; discussion 357-8. Available at http://archsurg.ama-assn.org/cgi/content/full/141/4/353
  3. Healthcare Failure Mode and Effect Analysis (HFMEA). VA National Center for Patient Safety. http://www.va.gov/ncps/SafetyTopics/HFMEA/HFMEAmaterials.pdf
  4. Managing risk in hospitals using integrated Fault Trees / FMECAs. Jan S. Krouwer, AACC Press, Washington DC, 2004. pp 17-18.

Not a member of the club

May 15, 2006

A Wall Street Journal article discusses the role of the New England Journal of Medicine in the Vioxx affair (1). An aspect of the article that caught my attention was the attempts by a pharmacist, Dr. Jennifer Hrachovec, to make known the dangers of Vioxx.

She first tried to do this during a radio call-in show that had as one of its guests Jeffrey Drazen, the editor of the New England Journal of Medicine. He blew off her comments.

She next submitted a Letter to the Editor of the New England Journal of Medicine. It was rejected.

Finally, she was able to get a Letter published in JAMA, the Journal of the American Medical Association.

I can relate to this sequence of events and suggest that part of the problem is that however relevant and correct a person is on an issue, the issue may not be taken seriously if that person is not “a member of the club.” Journals such as the New England Journal of Medicine have so many submissions that they are always looking for ways to reject papers. I suspect that one criterion used is simply the status of the person submitting the paper. Fortunately, Dr. Hrachovec persisted. For me, when someone blows off my comments, it is a source of motivation, and I have had my share of rejected Letters.

References

  1. Bitter Pill: How the New England Journal Missed Warning Signs on Vioxx. David Armstrong. Wall Street Journal, May 15, 2006, page A1.

Beware of the filter

May 14, 2006

Facilitators play an important role in quality activities. For example, they often lead training and brainstorming sessions. Brainstorming is a key part of FMEA (Failure Mode Effects Analysis) and fault trees. While this blog entry is not meant to be a summary of what makes a good facilitator, I was recently reminded of a problem with some facilitators; namely, the filter.

Filters are people who, while serving as facilitators, feel compelled to have all information go through them. The filter then re-releases the information, but in a changed form. That is, whatever was originally submitted to the filter is changed into a form that the facilitator understands (which may or may not match what the person with the original idea meant). In some cases the facilitator changes the way the information is presented by rewriting it or restating it (e.g., out loud). The latter should be familiar, as one often hears, “Now let me make sure I understand what you’ve said; you mean that …”

There is nothing wrong with the concept of a filter, since in principle a filter could make ideas clearer and, if nothing else, ensure that an idea is understood as intended. Whereas this is often useful – sometimes essential – between two people, the danger is when the filter operates in a group setting and makes ideas less clear, changes them, or omits them.

I recall a CLSI (formerly NCCLS) strategy session a few years ago. I had prepared a list of issues, which the facilitator rewrote. My list had already been read by the head of the organization, who made only a few minor changes, so the facilitator’s rewrite seemed completely unnecessary; more importantly, it failed to capture the issues as clearly as I had, and it dropped some issues altogether. So the strategy session took place without the right list of issues, and during the session all material went through the facilitator, as in “now let me make sure I understand what you’ve said; you mean that …” The facilitator, of course, also wrote up the results of the meeting. In all, this was a lost opportunity, largely caused by a filter.

A better way is for the facilitator to assemble all ideas through a consensus process. The final product may receive some editing for readability, but without the effects of a filter.