FDA Classes

October 28, 2007

Bob had a comment about my previous FRACAS post, which reminds me of something. In his comment, he refers to FDA device classes and says that Class II devices do not require as much rigor. FDA classes can cause some confusion because there are two types of classes – device classes and recall classes.

Device classes are class I, class II, or class III. It is class III that requires the most data and can “present a potential, unreasonable risk of illness or injury.”

Recall classes are also class I, class II, or class III. It is class I that is the most serious type of incident, one that “predictably could cause serious health problems or death.” Can one get a class I recall for anything other than a class III device? I don’t know the answer to this question, but to a company it is somewhat beside the point. Recalls are expensive regardless of the device class involved or what the FDA requires for data, and they are to be avoided (e.g., by using tools such as FRACAS).


Fixing American Healthcare – A Review

October 25, 2007

 

My review of this book is from the perspective of a healthcare consumer and also as a consultant to the medical device industry – I have no expertise in healthcare economics. In fact, the topic itself was initially of no interest to me – I figure we’re all going to get screwed, and someone talking about net present values of capitation expenditures would be a real snoozer. However, in this day and age of blogs, I came across the Covert Rationing Blog and found myself repeatedly coming back to it. Dr. Fogoros, aka DrRich, has a clear and entertaining writing style and made the topic interesting on his blog, so I bought the book. I was not disappointed.

The organization of this book is well thought out. The first 50 or so pages (out of slightly over 300) function as a summary of much of the analysis, after which readers can either abandon ship or read on. I found Dr. Fogoros’s GUTH – grand unification theory of healthcare – to be quite compelling and easy to understand. GUTH divides healthcare into four quadrants, the four combinations of centralized vs. individual control and low vs. high quality. The summary section includes a description of an investor session from 2000 that Dr. Fogoros attended, at which Jim Clark (founder of Netscape) discussed his then latest venture – WebMD. I could have benefited from Dr. Fogoros’s insight as to why WebMD would fail in its original concept, as I was one of the naive investors (fortunately only dabbling in this one). Simplifying insurers’ transaction costs and procedures was Jim Clark’s pitch, but the insurers did not want this simplification: their goal was to take money in while making it as complicated as possible to pay out claims.

In the rest of the book, Dr. Fogoros supplies more details. What is so compelling to me is that when Dr. Fogoros exposes the forces at play, everything falls into place. There are no evil people, just people doing what they do best within the rules of society. A football player who smashes his opponent on the field is cheered – off the field, the same behavior would land him in jail. In this book, the relevant players are like football players making hits on the field – they are not portrayed as evil.

Some of the discussions that were of interest: everything about money, the whole idea of covert (vs. open) healthcare rationing, the principle (which America refuses to abandon) that there can be no limits to healthcare, the destruction of the doctor-patient relationship, the history and workings of HMOs, why eliminating fraud won’t solve the healthcare cost problem, and randomized clinical trials.

Two major groups are discussed as trying to control healthcare – the “Gekkonians,” who believe that market forces will reduce cost, and the “Wonkonians,” who believe regulation can lower cost, largely by decreasing fraud.

Dr. Fogoros has an engaging writing style. It is as if he is telling us a story; subtle humor is present, but the book is not a joke-a-thon. One example – to illustrate the importance of cost in solutions, he notes that one could do a lot more to make a plane crash survivable, but would you pay 2.5 million dollars for a ticket to Cleveland? Dr. Fogoros relays a chilling account of his own run-in with regulators, an experience that would make most people think of retirement. Thankfully for us, one of his reactions was to become an expert in the topic and write this book.

My somewhat cynical view of healthcare insurance has been that you pay expensive premiums for many years, at some point develop a serious illness, and then have your policy abruptly cancelled. Does Fixing American Healthcare simply play to my previous bias? Perhaps, but one should know that I complain about anything I encounter in which I find a problem. Often, these complaints are published and thus are peer-reviewed complaints about peer-reviewed articles (the one that I am most proud of refers to the most cited publication in The Lancet). I do complain about a point made in Fixing American Healthcare. But it is a tiny point and does not detract from the main message of the book.

One of the values of this book is that it espouses the value of transparency and, just as importantly, explains healthcare so that it becomes transparent. Transparency is the enemy of those with hidden agendas. I remember the resistance to unit pricing in food stores – some characterized it as too confusing, but its value was in simplifying things.

Of course, it is important for Dr. Fogoros to point out problems, but what one also wants is proposed solutions. There is a preview of the solution in the section on clinical trials – openly ration healthcare and provide services to those who need them most. As I got into the final sections about solutions, everything made sense to me, but I must admit that I need to reread them. Since this will take some time, I thought it was important to provide this partial review now, because this book is so important.

Overall, this book is fabulous and I learned a lot. It deserves to sell out of its first printing. For subsequent printings, ok, one final complaint – larger print would be nice.


FRACAS? – Never heard of it

October 21, 2007

I just got back from co-presenting a short course on medical device software verification and validation for AAMI. One of the topics that I discussed was the use of FRACAS to improve software reliability.

One of the first questions I asked was, has anyone heard of FRACAS? Only one person raised their hand.

I also asked – has anyone ever heard of software reliability growth management? Again, only one person raised their hand – in this case a different person. The rest of this entry tries to explain these results.

Google returned 38,900 hits for the phrase “software reliability growth.” I assume that adding the word “management” would not have made much difference. So the people responsible for validating software for medical devices (at least in this sample) have not heard of a technique that is in use and written about. A similar result for FRACAS is not surprising, since FRACAS is really used to reduce errors from all sources – not just software. Here are my reasons for the lack of knowledge about FRACAS in the medical device industry.

1.       FRACAS is not required by the FDA. We live in a regulated world, where often the prime quality goal of an organization is to stay out of trouble with regulators. This is an understandable goal and makes sense – the problem is that other important goals may be neglected and quality practices may be limited to those prescribed by the FDA. Product recalls, including those that have caused harm, occur for approved products and not only at companies that get warning letters.

2.       Whereas reliability techniques associated with preventing potential errors (FMEA) and preventing recurrence of observed errors (FRACAS) are both used in military programs, only FMEA seems to have made it into healthcare. In this course, most people raised their hand when asked – have you heard of FMEA? My take on this is that there is a bias towards FMEA because it is associated with preventing potential errors. The notion that one can get anything useful by observing errors has been overlooked.

3.       This failure to recognize what has proved useful elsewhere (such as the defense industry) is perpetuated by various groups. For example, if one looks through the 2007 version of the ISO 14971 standard on risk management, there is not a single reference to FRACAS. The same result was found using the search function on the websites of the Institute for Healthcare Improvement, the National Quality Forum, and Leapfrog. Even using CAPA as a search term yielded no results.

It’s time to realize that observing errors and implementing corrective actions during product development – all before product release – is a form of risk management.


Near Miss

October 21, 2007

 

William Marella writes about near misses in Patient Safety and Healthcare. Much of what he says makes sense, but overall, the article itself is a near miss. Here’s why.

Mr. Marella reports that most hospitals follow regulators’ recommendations to report only adverse events and not near misses. To understand the problem with this (beyond what Mr. Marella discusses), let’s look at FRACAS (Failure Reporting And Corrective Action System). With FRACAS, the steps are as follows (a minimal code sketch of these steps appears after the list):

1.       Observe and report on all errors.

2.       Classify each error as to its severity and frequency of occurrence.

3.       Construct a Pareto chart.

4.       Implement corrective actions for the items at the top of the Pareto chart.

5.       Measure progress as an overall (e.g., combined) error rate.

6.       Continue steps 1-5 until the error rate goal is met.
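To make the workflow concrete, here is a minimal sketch of how these steps might be tallied for a handful of hypothetical error reports. The error categories, counts, opportunity denominator, and error-rate goal are all invented for illustration; a real FRACAS would of course draw on actual failure reports.

```python
from collections import Counter

# Step 1 output: hypothetical error reports as (error type, severity 1=minor .. 3=severe)
reports = [
    ("mislabeled specimen", 3), ("delayed result", 1), ("mislabeled specimen", 3),
    ("wrong dose calculated", 3), ("delayed result", 1), ("mislabeled specimen", 3),
    ("illegible order", 2), ("delayed result", 1), ("illegible order", 2),
]

opportunities = 10_000     # hypothetical number of opportunities for error
error_rate_goal = 5e-4     # hypothetical overall error-rate goal

# Step 2: classify each error by type (severity travels with each report)
counts = Counter(error_type for error_type, _severity in reports)

# Step 3: the Pareto is just the counts sorted from most to least frequent
pareto = counts.most_common()
print("Pareto of observed errors:")
for error_type, n in pareto:
    print(f"  {error_type}: {n}")

# Step 4: corrective actions would target the top of this list (mislabeled specimens here)

# Step 5: measure progress as one combined error rate, not a separate rate per category
overall_rate = len(reports) / opportunities
print(f"Overall error rate: {overall_rate:.2e} (goal {error_rate_goal:.0e})")

# Step 6: repeat steps 1-5 until the combined rate meets the goal
print("Goal met" if overall_rate <= error_rate_goal else "Keep iterating")
```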

So an immediate problem with what’s being done is that step #3 – constructing a Pareto chart – is being handed down from regulators, and one can question the origin of this Pareto. Moreover, as Mr. Marella correctly points out, this Pareto chart is about adverse outcomes, not events in the process. To understand why this is a problem, consider the following chart about errors:

[Chart: error event A → detection/recovery → error event B → detection/recovery → error event C, with severity increasing from A to C]

When errors occur, there is an opportunity for them to be detected. If detection (and recovery) are successful, a more serious error event has been prevented. So in this chart, error event A, when either undetected or detected but with a failed recovery, leads to error event B; if the same steps occur, error event B leads to error event C, with each later letter having a more severe consequence. As a real example of this, there was the national news story of the Mexican teenage girl who came to the US for a heart-lung transplant. Organs of the wrong blood type were selected (error event A) – this error was undetected and the unsuitable organs were transplanted (error event B). The correct reason for the patient’s declining health was eventually detected, but the recovery failed and the patient died (error event C).
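As a rough illustration of how detection and recovery govern this chain, here is a small sketch that propagates an error from event A toward event C. The detection and recovery probabilities are invented for illustration only.

```python
# Each stage: an error event escalates to the next, more severe event unless it is
# both detected and successfully recovered. Probabilities below are hypothetical.
stages = [
    ("A -> B", {"p_detect": 0.90, "p_recover": 0.95}),
    ("B -> C", {"p_detect": 0.80, "p_recover": 0.50}),
]

p_reach = 1.0  # given that error event A has occurred
for name, p in stages:
    p_stopped = p["p_detect"] * p["p_recover"]      # caught and successfully recovered
    p_escalates = p_reach * (1.0 - p_stopped)       # undetected, or detected but recovery fails
    print(f"{name}: P(escalate) = {p_escalates:.3f}")
    p_reach = p_escalates

print(f"P(reach most severe event C | A occurred) = {p_reach:.3f}")
```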

Let’s consider detection in more detail. In planned detection, a (detection) step is part of the process. So, in a clinical laboratory, a specimen is examined to see if it is adequate. For example, a serum sample that is red has been hemolyzed and will give an erroneous potassium result, so detection results in this sample not being analyzed – at least not for potassium. This causes a “delayed result” error rather than the more serious error of sending an erroneous result to the clinician. Typically, detection steps are optimized so that they are more or less guaranteed to be effective. In some cases, people have gone overboard – in one report, the average number of detection steps to assess whether the surgery site is correct is 12 – this is too many.

However, a salient feature of a near miss is accidental detection. This unplanned detection signifies that there is a problem with the process that requires correction. There is of course no guarantee that accidental detection will occur the next time – it is likely that it won’t – so typically, when accidental detection occurs, severity is assigned as if the detection had not occurred, i.e., the severity of the more serious event. The corrective action may be to create a planned detection step or to make other changes to the process. This also points out the problem with regulators constructing their own Pareto: by not collecting all errors and then classifying them, high-severity errors (near misses) will be neglected. So basically, steps #1 and #2 in a FRACAS have been omitted.

Another problem is the failure to construct an overall metric and measure it.

Some things to know about error rates

  1. One should track only one (or in some cases a few) error rates.
  2. The (overall) error rate goal should not be zero.
  3. Resources are limited. One can only implement a limited number of mitigations.

The National Quality Forum (NQF) has identified 28 adverse events to be tracked, the so-called “never events.” There is no way that one can establish allowable rates for each of these events, and a “never event” implies an allowable rate of zero, which is meaningless. For those who have a problem with any goal other than a zero error rate, one must understand that one is working with probabilities.

For example, say one must have a blood gas result. Assume that one knows that the failure rate of a blood gas instrument is, on average, once every 3 months, and that when it fails, the blood gas system will be unavailable for one day. Say this failure rate is too frequent. One can address this by having 2, 3, or as many blood gas instruments as one wants – or can afford – with failure now occurring only when all blood gas instruments fail simultaneously. But no matter how many blood gas instruments one has, the estimated rate of failure is never zero, although it can be made low enough to be acceptable and perhaps so low that it can be assumed “never” to occur – although there is a big difference between the “never” used by the NQF and the estimated probability of failure. In fact, the difference between a calculated rate that is greater than zero and possible to occur in one’s lifetime and a calculated rate that translates to “never” could be a substantial difference in cost.

The blood gas example uses redundancy to prevent error. The wrong site surgery example above uses detection, which is of course much cheaper than buying additional instruments. Each mitigation has its own cost. Computerized physician order entry is an expensive mitigation to prevent medication errors due to illegible handwriting. Financially, all of this reduces to a kind of portfolio analysis: one must select from a basket of mitigations an optimal set that achieves the lowest possible overall error rate at an affordable cost. (A sketch of both calculations follows.)
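Here is a minimal sketch of both ideas. The blood gas numbers (failure about every 90 days, one day of downtime) come from the example above; the independence assumption, the mitigation list, the costs and effects, the baseline rate, and the budget are all invented for illustration.

```python
from itertools import combinations

# --- Redundancy: the all-instruments-down probability shrinks but never reaches zero ---
mtbf_days = 90.0   # from the example: a failure about once every 3 months
mttr_days = 1.0    # from the example: down for one day per failure
q = mttr_days / (mtbf_days + mttr_days)   # fraction of time one instrument is unavailable

for n in range(1, 5):
    # assuming independent failures, service is lost only when all n are down at once
    print(f"{n} instrument(s): P(no blood gas available) ~ {q**n:.2e}")

# --- Portfolio: pick the mitigations giving the lowest overall error rate within a budget ---
baseline_rate = 1e-3   # hypothetical overall error rate before mitigation
# (name, cost in $k, fraction of the remaining error rate removed) -- all hypothetical
mitigations = [
    ("second blood gas instrument", 40, 0.30),
    ("planned detection step",       5, 0.25),
    ("computerized order entry",   500, 0.40),
    ("barcode specimen labeling",   60, 0.35),
]
budget = 120  # $k, hypothetical

best = (baseline_rate, ())
for r in range(len(mitigations) + 1):
    for subset in combinations(mitigations, r):
        if sum(cost for _, cost, _ in subset) > budget:
            continue  # this basket of mitigations is not affordable
        rate = baseline_rate
        for _, _, reduction in subset:
            rate *= (1.0 - reduction)
        if rate < best[0]:
            best = (rate, tuple(name for name, _, _ in subset))

print(f"Best affordable set: {best[1]}, overall error rate {best[0]:.2e}")
```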

This (portfolio) analysis only makes sense if one is combining errors. If error A causes patient death or serious injury and error B does the same, and there are many more such events, one can combine these errors to arrive at a single error rate for all error events that cause patient death or serious injury. This is similar to financial analysis, whereby there is one “bottom line,” the profitability of the overall business – individual product lines are combined to arrive at one number.


The “Axiom of Industry” applied to healthcare

October 14, 2007

One of the most interesting blogs that I have come across is the Covert Rationing Blog. The author, DrRich (Richard N. Fogoros, MD), has written a book, “Fixing American Healthcare,” which I am in the process of reading. So far, it is a fabulous book, and I am learning a lot. I did take exception to a point made on DrRich’s blog, and I follow up on that here, having now reached that section in his book.

His “axiom of industry” is that standardization of an industrial process reduces cost and improves outcomes. This industrial idea is being applied to healthcare. DrRich gives an example where standardization applied to healthcare works (hip replacement) and one where it doesn’t work (congestive heart failure – CHF). The reasons he provides – although not exactly so stated – are that for hip replacement one has a high state of knowledge, while for CHF one has an intermediate state of knowledge, and when the state of knowledge is not high enough, standardization will not work.

This is where DrRich needs to continue with his industrial analogy. There are many processes in industry with a high state of knowledge as well as processes with an intermediate state of knowledge. Yes, in industry one standardizes processes with a high state of knowledge, but this does not happen when the state of knowledge is inadequate. Here, one uses a variety of approaches, including trial and error; that is, observing errors and then applying corrective actions. FRACAS (Failure Reporting And Corrective Action System) is a formal name for this method and, believe it or not, the acronym TAAF (Test, Analyze, And Fix) is also used. Although quality managers do not often admit that observing errors and then fixing them is the method used, it is at times the best method to improve a process.

In healthcare, this method is often used as well. As patients, we are aware of the physician saying, let’s try treatment XYZ and see what happens, implying that if the treatment doesn’t work (an incorrect treatment decision) another treatment will be tried. If this actually happens and the second treatment works, one might not be happy but it is possible that the physician nevertheless followed a reasonable course of action. Moreover, for a disease condition one is not always in a “standardization” or “trial and error” situation. One often uses a mixture of the two. And, there is always the possibility that the state of knowledge for a disease may increase at some point to allow for standardization. I previously commented that standardization of a process that is not ready is likely to lock in unknown errors.

The other point that DrRich makes is that patients are not widgets. The implication is a little ominous here, namely, that morally deficient industrial managers, given the chance, would discard patients as readily as widgets. I commented before that one is optimizing a process – the correct analogy is to throw out an incorrect treatment, not a patient. Moreover, widgets are usually thought of as low cost items. No one considers a patient to be of low value. So here the analogy must be between patients and high cost widgets (of which there are many). In industry, as in medicine, losing (discarding) a high cost item is not good.

One needs to ask how many medical conditions are amenable to standardization (i.e., have a high state of knowledge). Covert rationing may well be responsible for patients being treated as widgets, including misapplying industrial processes, but these processes themselves can be applied to healthcare to benefit patients, although they will not solve the healthcare cost problem.


The problem with the Joint Commission requirement to perform a FMEA and a suggestion on how to fix it

October 5, 2007

The Joint Commission, which accredits hospitals, requires each member hospital to select at least one high risk process per year and perform proactive risk assessment on it (requirement LD.5.2). Typically, FMEA is used to satisfy this requirement. The problems with this requirement are:

  1. Everyone knows something about risk management (e.g., skiing down that slope is too risky), but few people know how to properly conduct a FMEA. It is unlikely that every hospital can acquire this expertise, and impractical to require it.
  2. Adequately performing a FMEA requires significant effort beyond knowledge of FMEA techniques. Typically, one adds a fault tree to the FMEA and quantifies the fault tree. The two prior blog entries describe the issues that arise when one fails to quantify risks. Quantifying the risk of each process step requires data and modeling, not just qualitative judgment (see the sketch after this list).
  3. It is unlikely that each hospital will obtain the commitment to adequately staff a risk management activity – moreover, one can question whether Joint Commission inspectors have the knowledge to adequately evaluate each hospital’s results.
  4. All of the above will result in hospitals performing an activity to achieve a checkmark in a box, rather than actually reducing risk.
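To give a sense of what “quantify the fault tree” involves, here is a minimal sketch with an invented two-branch fault tree for a wrong-medication top event. The event names and probabilities are hypothetical and are not taken from any hospital’s data; the point is only that the arithmetic requires real numbers, not a show of hands.

```python
# Hypothetical fault tree for "patient receives wrong medication" (top event).
# OR gate: any input causes the output; AND gate: all inputs must occur.

def p_or(*ps):
    """Probability that at least one of several independent input events occurs."""
    result = 1.0
    for p in ps:
        result *= (1.0 - p)
    return 1.0 - result

def p_and(*ps):
    """Probability that all independent input events occur."""
    result = 1.0
    for p in ps:
        result *= p
    return result

# Basic events with hypothetical per-administration probabilities
p_illegible_order   = 1e-3
p_pharmacist_misses = 5e-2
p_lookalike_vials   = 2e-3
p_nurse_check_fails = 1e-1

# Intermediate events (AND gates: both contributing failures must happen)
p_wrong_drug_dispensed = p_and(p_illegible_order, p_pharmacist_misses)
p_wrong_vial_selected  = p_and(p_lookalike_vials, p_nurse_check_fails)

# Top event (OR gate): either path leads to the wrong medication reaching the patient
p_top = p_or(p_wrong_drug_dispensed, p_wrong_vial_selected)
print(f"P(wrong medication per administration) ~ {p_top:.2e}")
```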

What makes more sense is to consider hospital processes as similar and to have standards groups perform a FMEA for each process. The results could then inform guidelines. This suggestion is also not without problems, which are as follows:

  1. A lot of people hate guidelines, so acceptance may be difficult. Some will argue that each hospital is different. To counter this, one could suggest that the hospital start with the guideline and adapt it to their process. This would be a manageable task.
  2. There is no guarantee that the guideline developed is the right one.
  3. Any guideline cannot guarantee freedom from errors – the guidelines themselves may not be 100% effective. Moreover, guidelines may cause one to relax vigilance about errors as in – “we’re following the guideline”.

Examining an actual example, wrong site surgery has undergone a standards approach. The Joint Commission studied this error and came up with the Universal Protocol, which hospitals are required to follow. One issue is a report (1) citing that, in a set of hospitals, there are on average 12 redundant checks to prevent wrong site surgery. This indicates that something has gone wrong. Perhaps, with quantification of risk, one could show that 12 checks are too many (see the sketch below). The report also shows that the Universal Protocol would have been unable to prevent all wrong site surgeries (the study included surgeries performed before the Universal Protocol was required). This also highlights the need to maintain a FRACAS (Failure Reporting And Corrective Action System) to deal with observed errors. This too would benefit from being done nationally. The data collection part, S.544 (2), is already law. What is needed is the complete FRACAS approach to this data.
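Here is the sort of quantification I have in mind for the 12-checks question, as a rough sketch. The underlying error rate, the per-check miss probability, and the assumption that the checks are independent are all made up for illustration; they are not taken from the report.

```python
# How much does each additional independent check buy you?
p_initial_error = 1e-3   # hypothetical rate of setting up the wrong surgical site
p_check_misses  = 0.10   # hypothetical chance that a single check fails to catch the error
                         # (independence between checks is assumed, which is generous)

for n_checks in (0, 1, 2, 3, 6, 12):
    residual = p_initial_error * (p_check_misses ** n_checks)
    print(f"{n_checks:2d} checks: residual wrong-site rate ~ {residual:.1e}")

# With these assumed numbers, the gain from the 7th through 12th checks is far below any
# plausible rate of failure modes the checks cannot catch at all, which is one way to
# argue that 12 checks are too many.
```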

References

  1. Mary R. Kwaan, MD, MPH; David M. Studdert, LLB, ScD; Michael J. Zinner, MD; Atul A. Gawande, MD, MPH. Incidence, Patterns, and Prevention of Wrong-Site Surgery. Arch Surg. 2006;141:353-358. See: http://archsurg.ama-assn.org/cgi/content/full/141/4/353
  2. See: http://frwebgate.access.gpo.gov/cgi-bin/getdoc.cgi?dbname=109_cong_bills&docid=f:s544enr.txt.pdf

 


Medical Error FMEA Risk Grids – why they are a problem II

October 3, 2007

This blog entry summarizes the previous entry, Medical Error FMEA Risk Grids – why they are a problem.

1.       Risk grids are presented in which each cell is a combination of severity and probability of occurrence.

2.       In the VA risk grid, the remote-by-catastrophic cell is problematic because the “remote” definition is not infrequent enough (when coupled with catastrophic events), yet this cell’s risk is labeled as “ok.”

a.       Although this would be solved by adding another probability of occurrence row with a lower probability of occurrence, the problem would still remain if one does not quantify probabilities*.

3.       The risk grids are often called semi-quantitative. This is not really true, as often no measurements or data are taken to justify the location of events with respect to probability of occurrence.

4.       No matter how many mitigations are put in place, the risk of an adverse event is never zero.

a.       But one can lower the risk through mitigations so that the likelihood of occurrence is so low that it is acceptable. Hence, there must always be an “ok” cell, even for catastrophic events. In any case, one can’t keep on adding mitigations forever, because resources are limited.

5.       Without quantifying probability of occurrence, one is in danger of accepting risk as “ok” when it is not low enough (a numeric sketch of this point appears at the end of this entry).

6.       Quantifying probabilities for all events within a process is a massive amount of work.

*Example of not quantifying probabilities. At a FMEA meeting, regarding a specific event: “I think the likelihood of that event is going to be real low. Everyone agree? … Yeah.”
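To illustrate the danger in point 5 numerically, here is a small sketch. The probability bands, the number of opportunities per year, and the acceptance criterion are all invented for illustration; they are not the VA definitions.

```python
# Hypothetical probability-of-occurrence bands (per opportunity) for a risk grid.
bands = {
    "frequent":   1e-2,
    "occasional": 1e-3,
    "remote":     1e-4,   # the "remote" row in question
    "improbable": 1e-7,   # the extra, lower row suggested in point 2a
}

opportunities_per_year = 50_000                 # hypothetical number of times the process runs
max_acceptable_catastrophic_per_year = 0.01     # hypothetical acceptance criterion

for band, p in bands.items():
    expected_catastrophic = p * opportunities_per_year
    verdict = "ok" if expected_catastrophic <= max_acceptable_catastrophic_per_year else "NOT ok"
    print(f"{band:10s}: ~{expected_catastrophic:8.3f} catastrophic events/year -> {verdict}")

# With these assumed numbers, labeling the remote-by-catastrophic cell "ok" accepts about
# five catastrophic events per year; only the added, lower band meets the acceptance criterion.
```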