The Joint Commission, which accredits hospitals, requires each member hospital to select at least one high-risk process per year and perform a proactive risk assessment on it (requirement LD.5.2). Typically, FMEA* (Failure Mode and Effects Analysis) is used to satisfy this requirement. The problems with this requirement are:
1. Everyone knows something about risk management (e.g., skiing down that slope is too risky), but few people know how to properly conduct an FMEA. It is impractical to expect every hospital to acquire this expertise.
2. Adequately performing an FMEA requires significant effort beyond knowledge of FMEA techniques. Typically, one adds a fault tree to the FMEA and quantifies the fault tree. The two prior blog posts describe the issues that arise when risks are not quantified. Quantifying the risk of each process step requires data and modeling, not just qualitative judgment.
3. It is unlikely that each hospital will secure the commitment to adequately staff a risk management activity. Moreover, one can question whether Joint Commission inspectors have the knowledge to adequately evaluate each hospital's results.
4. The likely result of all of the above is that hospitals will perform the activity to check a box rather than to actually reduce risk.
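To make point 2 concrete, here is a minimal sketch of the quantitative core of an FMEA: each failure mode is scored for severity, occurrence, and detectability, and the product (the Risk Priority Number) is used to rank where mitigation effort should go. The process steps, failure modes, and ratings below are hypothetical examples for illustration only, not data from any actual hospital process.

```python
# Illustrative FMEA risk ranking. All steps, modes, and scores are
# hypothetical; a real FMEA would derive occurrence from data.

# Each failure mode is scored on 1-10 scales:
#   severity   - how bad the effect is
#   occurrence - how often the failure happens
#   detection  - how hard it is to detect (10 = nearly undetectable)
failure_modes = [
    {"step": "specimen labeling", "mode": "label swapped",
     "severity": 9, "occurrence": 3, "detection": 6},
    {"step": "order entry", "mode": "wrong patient selected",
     "severity": 8, "occurrence": 2, "detection": 4},
    {"step": "result reporting", "mode": "delayed report",
     "severity": 5, "occurrence": 6, "detection": 2},
]

def rpn(fm):
    """Risk Priority Number: severity x occurrence x detection."""
    return fm["severity"] * fm["occurrence"] * fm["detection"]

# Rank failure modes so mitigation goes to the highest risks first.
for fm in sorted(failure_modes, key=rpn, reverse=True):
    print(f'RPN {rpn(fm):4d}  {fm["step"]}: {fm["mode"]}')
```

Even this toy version shows why expertise and data matter: the ranking is only as good as the occurrence and detection estimates, which is exactly the quantification step most hospitals are not staffed to do.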
A more sensible approach is to recognize that hospital processes are largely similar and to have standards groups perform an FMEA for each process. The results could then inform guidelines. This suggestion is also not without problems:
1. Many people dislike guidelines, so acceptance may be difficult. Some will argue that each hospital is different. To counter this, one could suggest that each hospital start with the guideline and adapt it to its own process, a manageable task.
2. There is no guarantee that the guideline developed is the right one.
3. No guideline can guarantee freedom from errors; the guidelines themselves may not be 100% effective. Moreover, guidelines may cause one to relax vigilance about errors, as in "we're following the guideline."
Consider an actual example: wrong-site surgery has undergone a standards approach. The Joint Commission studied this error and developed the Universal Protocol, which hospitals are required to follow. One issue is a report (1) finding that, in a set of hospitals, an average of 12 redundant checks were in place to prevent wrong-site surgery. This indicates that something has gone wrong; with quantification of risk, one could show that 12 checks are too many. The report also shows that the Universal Protocol would not have prevented all wrong-site surgeries (the study included surgeries performed before the Universal Protocol was required). This also highlights the need to maintain a FRACAS (Failure Reporting And Corrective Action System) to deal with observed errors, which would likewise benefit from being done nationally. The data collection part, S.544 (2), is already law. What is needed is the complete FRACAS approach applied to these data.
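The claim that 12 checks may be too many can be illustrated with a simple redundancy model. If one assumes, purely for illustration, that each check independently misses an error with some probability p, then the chance that all n checks miss it is p^n, and the marginal benefit of each added check shrinks geometrically. A quick sketch:

```python
# Diminishing returns of redundant checks under an idealized model.
# Both the per-check miss probability and the independence of the
# checks are illustrative assumptions, not measured values.
p_miss = 0.1  # hypothetical probability that one check misses the error

for n in range(1, 13):
    p_all_miss = p_miss ** n  # probability that all n checks miss
    print(f"{n:2d} checks: P(error slips through) = {p_all_miss:.1e}")
```

In practice the checks are not independent (staff may share the same wrong information, a common-cause failure), so the real benefit of the later checks is even smaller than this model suggests. That is precisely why quantification with real data, rather than piling on more checks, is the useful exercise.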
1. Mary R. Kwaan, MD, MPH; David M. Studdert, LLB, ScD; Michael J. Zinner, MD; Atul A. Gawande, MD, MPH. Incidence, Patterns, and Prevention of Wrong-Site Surgery. Arch Surg. 2006;141:353-358. See: http://archsurg.ama-assn.org/cgi/content/full/141/4/353
*Dr. Krouwer has a software product for hospitals that performs FMEA. He is looking to move on from this project and will consider offers to buy the source code.