Risk Management and the Obstructionists

December 30, 2010

I have been peripherally involved in a CLSI standard about risk management, which IMHO is floundering. My suggestion to right things by simply using standard FMEA (failure mode and effects analysis) and FRACAS (failure reporting, analysis, and corrective action system) methods was rebuffed: the methods were said to be too complicated, and quantifying error rates was said to be unnecessary. It was suggested that “baby steps” are what’s needed.

Well, there’s nothing more fundamental – I wouldn’t call them baby steps – than measuring an error rate. Without quantifying where you are, there’s no way to know when you’re done or how effective your improvement program is.

And rather than trying to measure 27 different error rates, one can combine all errors into one rate by classifying each error type as to its severity. This is how FRACAS works:

  1. Observe and classify errors
  2. Measure the error rate
  3. Rank errors using a Pareto
  4. Propose fixes for the errors at the top of the Pareto
  5. Go back to step 1

And these steps are not complicated!
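To make the point concrete, here is a minimal sketch of the loop in Python, assuming a simple severity-weighting scheme; the error types, weights, counts, and number of opportunities are made up for illustration.

```python
from collections import Counter

# Hypothetical severity weights (higher = worse); any consistent scheme works.
SEVERITY = {"wrong patient ID": 10, "transcription error": 5, "delayed result": 3}

# Step 1: observe and classify errors (illustrative observations).
observed_errors = ["delayed result", "transcription error", "delayed result",
                   "wrong patient ID", "delayed result"]
opportunities = 10_000  # e.g., number of reported results in the period

# Step 2: measure the error rate; weighting by severity combines all error
# types into a single rate.
counts = Counter(observed_errors)
weighted_rate = sum(SEVERITY[e] * n for e, n in counts.items()) / opportunities
print(f"Severity-weighted error rate: {weighted_rate:.4f}")

# Step 3: rank the error types (a Pareto chart is just this ranking drawn as bars).
pareto = sorted(counts.items(), key=lambda kv: SEVERITY[kv[0]] * kv[1], reverse=True)
for error_type, n in pareto:
    print(f"{error_type}: {n} occurrences, weighted contribution {SEVERITY[error_type] * n}")

# Steps 4 and 5: propose fixes for the errors at the top of the ranking,
# then keep observing and repeat.
```

Each step is bookkeeping, not advanced analysis.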


Techniques to improve results, not documentation, improve quality

December 20, 2010

I get to vote on GP26, CLSI’s document on a quality management system, and I am voting to reject it. Rejecting GP26 is like voting to reject apple pie, so here are my reasons.

GP26 is similar to ISO 9001 and ISO 15189; the latter ISO standard is being used to accredit laboratories. GP26 had a bad start – the driving force behind it was a person from the Abbott Quality Institute, and this person’s pitch was made at CLSI a year before the FDA delivered to Abbott what was then the largest IVD fine, for lack of quality.

Any CLSI document should describe the history of the procedure and any criticisms of it. GP26 does not do this. Since GP26 follows ISO 9001, the ample criticism of ISO 9001 should be discussed, such as the comments by John Seddon or by me (Krouwer JS. ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry. Accred. Qual. Assur. 2004;9:39-43).

ISO 9001 and ISO 15189 focus on documentation rather than results. To its credit, GP26 does have a section about results. But although “quality goals” are mentioned several times, there is no example of what these goals might be. If there were a specific goal, such as an overall error rate with errors classified by severity and frequency of occurrence, then one could use the observed errors to select quality indicators, rather than have laboratory management select them from a long list as GP26 suggests.
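To illustrate what a specific goal and data-driven indicator selection might look like, here is a minimal sketch; the error types, severity weights, counts, and goal are all invented for illustration and are not taken from GP26.

```python
# Hypothetical observed errors: (type, severity weight 1-10, occurrences per 10,000 results).
observed = [
    ("mislabeled specimen",            9,  4),
    ("result reported on wrong chart", 10, 1),
    ("turnaround time exceeded",       2,  60),
    ("QC rule violation ignored",      6,  8),
]

GOAL = 100  # example quality goal: severity-weighted errors per 10,000 results

# One overall, severity-weighted rate lets the goal be stated as a single number.
weighted = sum(severity * count for _, severity, count in observed)
print(f"Severity-weighted errors per 10,000 results: {weighted} (goal <= {GOAL})")

# Rank by criticality (severity x frequency) and pick the top of the ranking as
# the quality indicators to monitor, instead of choosing from a generic list.
ranked = sorted(observed, key=lambda e: e[1] * e[2], reverse=True)
indicators = [name for name, _, _ in ranked[:2]]
print("Quality indicators selected from observed errors:", indicators)
```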

And there is no mention of FMEA in GP26.


Safety – what to measure

December 2, 2010

Reading the comments on a blog entry (http://www.medrants.com/archives/5953) prompts me to comment. For discussion purposes, consider the following four-box table.

                                      | Patient suffers harm | Patient doesn't suffer harm
  Preventable medical error made      | XXXXXXXXXXXXXXX      |
  No preventable medical error made   |                      |

As an example, consider placing a central line, with the medical error being failure to wash hands and the harm being an infection. One could focus on the “Patient suffers harm” column and try to find cases where a preventable medical error was made (the cell marked with X’s). But there are two problems:

  1. Failure to wash hands may not have caused the infection.
  2. The infection may be unrelated to a preventable medical error.

Both problems stem from focusing on outcomes, especially since bad outcomes are common in hospitals (patients die).

One should focus on errors, not outcomes. In the case of central line infections, this is rather easy if one follows the checklist recommended by Pronovost, because the process of placing a central line is then monitored for errors.

Also note that in the original use of the checklist for central line infections, in the “before” case one of the 5 checklist items was not carried out about 30% of the time, and the central line infection rate was about 10%.

To focus on errors, one must describe the process, say what is and is not an error, and observe the process each time. Each of these steps has a spectrum of difficulty, and since there are so many processes, the overall program of improving safety is challenging. But it has been done before – the rate of anesthesiology errors (and bad outcomes) was reduced in the 70s and 80s.
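To make the error-focused approach concrete, here is a minimal sketch of monitoring the central line process itself rather than its outcomes; the checklist wording paraphrases the commonly cited items, and the observations are invented for illustration.

```python
# Checklist steps for placing a central line (wording paraphrased, not official).
CHECKLIST = ["wash hands", "full barrier precautions", "chlorhexidine skin prep",
             "avoid femoral site", "remove unnecessary lines"]

# Each observation records which steps were actually carried out for one placement.
observations = [
    {"wash hands", "full barrier precautions", "chlorhexidine skin prep",
     "avoid femoral site", "remove unnecessary lines"},
    {"full barrier precautions", "chlorhexidine skin prep",
     "avoid femoral site", "remove unnecessary lines"},   # hand washing skipped
    {"wash hands", "full barrier precautions", "chlorhexidine skin prep",
     "avoid femoral site", "remove unnecessary lines"},
]

# An error is any checklist step not carried out; it is counted per placement,
# without waiting to see whether an infection (the outcome) occurs.
placements_with_error = sum(1 for obs in observations if set(CHECKLIST) - obs)
print(f"Placements with at least one missed step: {placements_with_error / len(observations):.0%}")

# Per-step miss counts show which step to target first.
for step in CHECKLIST:
    misses = sum(1 for obs in observations if step not in obs)
    print(f"{step}: missed {misses} time(s)")
```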