I was sorry to hear that Rick Miller passed away. I had known Rick since the late 70s, when we were both at Technicon Instruments. Rick was in a quality role then, where he always put the interests of customers first. We both left Technicon, but I continued to see Rick at NCCLS (now CLSI) meetings. Rick was the chairholder of the CLSI subcommittee on uncertainty intervals. Rick asked me to become a member of that subcommittee, in spite of the fact that he knew I was against establishing uncertainty intervals for clinical laboratories using GUM (the ISO document on uncertainty intervals). He wanted me on the committee for my opposing opinions. I was impressed that he did this – it was the right thing to do, but not the easy thing to do. I think others would have taken the easy route – not Rick. I enjoyed being a part of that subcommittee under Rick’s leadership – it was one of the most open subcommittee experiences I have had, one in which the many different opinions were all allowed to be heard. It will be difficult to find a replacement for Rick on this subcommittee.
Given a regression equation for method comparison data, here are some simple things one can do, besides the usual. In what follows, Y is defined as the new method and X as the reference method.
If the slope is greater than 1.0 and the intercept greater than 0, Y will never equal X; when X is positive, Y will always be greater than X.
If the slope is less than 1.0 and the intercept less than 0, Y will never equal X; when X is positive, Y will always be less than X.
If, in an Excel file, the slope is in cell A2 and the intercept in cell A3, the point where Y=X is given by the cell formula =A3/(1-A2). (Setting slope·X + intercept = X and solving for X gives X = intercept/(1 − slope).)
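The same calculation as the Excel formula can be sketched in a few lines of Python; the slope and intercept values here are hypothetical, chosen only to show the arithmetic.

```python
# Where the regression line Y = slope*X + intercept crosses the identity line Y = X.
# Setting slope*X + intercept = X and solving gives X = intercept / (1 - slope),
# the same calculation as the Excel formula =A3/(1-A2).

def y_equals_x_point(slope: float, intercept: float) -> float:
    """Return the X value at which the fitted line meets the line Y = X."""
    if slope == 1.0:
        raise ValueError("slope of exactly 1.0: the lines never cross (or coincide)")
    return intercept / (1.0 - slope)

# Hypothetical regression results, for illustration only:
print(y_equals_x_point(slope=1.05, intercept=-2.0))  # crossing near X = 40
```

Note that when the slope is exactly 1.0 the denominator is zero, which matches the geometry: a line parallel to Y=X never crosses it.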
One can also prepare a table of biases for relevant regions, for example, the predicted bias at each medical decision level of a sodium assay.
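Such a bias table is easy to generate from the regression equation. In this sketch the slope, intercept, and sodium decision levels are all hypothetical values, used only to illustrate the layout.

```python
# Bias table for a method-comparison regression, Y = slope*X + intercept.
# Bias at a given X is (predicted Y) - X.  The slope, intercept, and sodium
# decision levels below are hypothetical, for illustration only.

def bias_at(x: float, slope: float, intercept: float) -> float:
    """Bias of the new method relative to the reference method at concentration x."""
    return (slope * x + intercept) - x

slope, intercept = 1.02, -1.5          # assumed regression results
levels_mmol_per_l = [130, 140, 150]    # assumed sodium decision levels

print("X (mmol/L)  bias (mmol/L)")
for x in levels_mmol_per_l:
    print(f"{x:>10}  {bias_at(x, slope, intercept):>+13.2f}")
```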
CAPA – Corrective and Preventive Action
FRACAS – Failure Reporting, Analysis, and Corrective Action System
FMEA – Failure Mode Effects Analysis
Unlike FMEA, FRACAS is something some (probably many) people have never heard of. When I was explaining FRACAS to some people, someone said “oh, that’s CAPA, we do that now.” Although CAPA and FRACAS share features, there are key differences.
Timing – FRACAS deals with products before release for sale, and CAPA with products after release for sale. FRACAS, however, can also be continued after products are released.
Responsibility – In medical device companies, FRACAS is usually conducted by R&D and CAPA by service and manufacturing. While this may not sound like a big difference, it is. For example, service is more concerned with keeping customers happy than with corrective action.
Data source – In CAPA, there are two data sources, (customer) complaints and (manufacturing) nonconformities. This sets up the possibility for two CAPAs, which may not talk to each other; namely, a CAPA in manufacturing to deal with nonconformities and a CAPA by service to deal with customer complaints. In a FRACAS that is conducted before release for sale, the data source is “events”. An event is an observed action that has the potential to cause harm, increased cost, a return, a complaint, and so on. Note that not all events will lead to complaints. For example, a clinician may disregard an erroneous result and not complain about it.
Metrics – While anything is possible, the reliability growth management metrics associated with FRACAS are almost never used with CAPA.
Regulations – FDA requires medical device companies to have procedures in place to address nonconformities and complaints. This is traditionally handled by CAPA. There is no requirement for FRACAS.
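One widely used reliability growth metric is the Duane model, in which cumulative MTBF grows as a power law of cumulative test time, so a log-log plot of cumulative MTBF versus time is a straight line whose slope is the growth rate. This is a sketch of that calculation; the failure times are hypothetical and the Duane model is my example, not something prescribed by the post.

```python
import math

# Duane reliability-growth sketch: cumulative MTBF is assumed to grow as a
# power law of cumulative test time, so log(cumulative MTBF) versus log(time)
# is a straight line whose slope is the growth rate alpha.
# The failure times below are hypothetical, for illustration only.

failure_times = [100, 250, 450, 800, 1400, 2400]  # cumulative hours at each failure

# Cumulative MTBF after the i-th failure = elapsed time / number of failures.
log_t, log_mtbf = [], []
for i, t in enumerate(failure_times, start=1):
    log_t.append(math.log(t))
    log_mtbf.append(math.log(t / i))

# Least-squares slope of log(MTBF) on log(t) is the Duane growth rate alpha.
n = len(log_t)
mean_t = sum(log_t) / n
mean_m = sum(log_mtbf) / n
alpha = (sum((x - mean_t) * (y - mean_m) for x, y in zip(log_t, log_mtbf))
         / sum((x - mean_t) ** 2 for x in log_t))

print(f"Duane growth rate alpha = {alpha:.2f}")  # positive alpha: reliability improving
```

A positive slope means each failure-and-fix cycle is lengthening the time between failures, which is exactly the kind of trend a FRACAS program tracks and a CAPA program typically does not.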
To sum up, is CAPA the same as FRACAS? No, not by a long shot.
I had occasion – thanks to a helpful reference librarian at the Lamar Soutter Library at U. Mass. Medical School – to read an entire issue of Clinical Chemistry and Laboratory Medicine devoted to “laboratory medicine and patient safety.”
One of the first things that struck me is that so many articles started out with a reference to the Institute of Medicine report on patient safety (1). Hmmm, it seems one of my articles started out this way too (2). I’m getting tired of the use of this reference. In most cases it’s just boilerplate, so that’s OK, but sometimes it’s not. For example, in a section that follows the reference, Donaldson says (3)
“Many adverse event detection systems are embryonic, particularly in the effective analysis of risks and hazards.”
This makes one think that we are just getting started with tools and techniques to reduce preventable medical errors. This neglects the anesthesiology story.
Back in the 70s, anesthesiology had a high preventable medical error rate. Yet, without an Institute of Medicine report or regulations, a group at Massachusetts General Hospital studied why this error rate was so high (4-5), using techniques from aviation. So even 30 years ago, these techniques were not embryonic, they had just not been applied effectively to anesthesiology. Shortly after this initial work, prevention strategies were developed. The only outside event that occurred was a 20/20 television show about the dangers of anesthesiology that aired in 1982 and undoubtedly helped in more widespread implementation of the prevention strategies.
1. Kohn LT, Corrigan JM, Donaldson MS, editors. To err is human: building a safer health system. Washington, DC: Institute of Medicine, National Academy Press; 2000.
2. Krouwer JS. Recommendation to treat continuous variable errors like attribute errors. Clin Chem Lab Med 2006;44(7):797–798.
3. Donaldson L. Foreword. Clin Chem Lab Med 2007;45(6):697–699.
4. See http://www.anesthesiology.org/pt/re/anes/fulltext.00000542-199604000-00025.htm;jsessionid=GKGJw17GTqY0NMY8mN6RndvWspLF7n2SstK4FbQr2w2xwF7wTyJh!-9948752!181195628!8091!-1
5. Cooper JB, Newbower RS, Long CD, McPeek B. Preventable anesthesia mishaps: a study of human factors. Anesthesiology 1978;49:399–406. An online version of Paper 5 can be found at http://qshc.bmj.com/content/vol11/issue3/#CLASSIC_PAPERS