The last entry was about FMEA goals, yet the word “goal” doesn’t appear in ISO 14971. Perhaps “goal” suffered the same fate as the word “mitigation” – banned from the standard. Still, ISO 14971 has an implied goal: the residual risk must be acceptable. Recall that residual risk is the risk that remains after risk control measures have been taken. Here’s where things get a little tricky.
When the residual risk is unacceptable, one is supposed to perform a risk/benefit analysis to determine whether the benefits of the medical procedure performed by the device outweigh the residual risk.
To frame this discussion, consider two types of residual risk:
1. A residual risk from a known issue, such as an interference, where eliminating the risk is not “practical”
2. The overall residual risk from unknown issues. A certain amount of effort is used to search for risks (e.g., through FMEA, FTA, and FRACAS). At some point, more effort is considered not practical. Note: One can look at FDA recalls to see that unknown risks are often found in released products and lead to recalls (1).
Use of the word “practical” in ISO 14971 implies that in some cases risk reduction is too expensive. This is not meant to be pejorative, since everyone has limited resources.
In most of the standard, the risk/benefit analysis is framed as weighing the medical device’s clinical benefit to the patient against its risk. But ISO 14971 points out an additional frame for the discussion:
“Those involved in making risk/benefit judgments have a responsibility to understand and take into account the technical, clinical, regulatory, economic, sociological and political context of their risk management decisions.”
To understand the issue, consider Type 1 diabetes, with the medical procedure being use of a home glucose meter. Because of risk types 1 and 2 above, the glucose meter will fail and provide an erroneous result, albeit rarely. This is the current state of affairs, and it is clear that the benefit of the home glucose meter outweighs the risk (e.g., the ADA recommends testing for glucose). Yet if one conducts a thought experiment and raises the frequency of (all) home glucose meter failures, simple decision analysis (2) still warrants use of the device. That is, measuring glucose, even if it gives an erroneous result more often than rarely, is clinically better than not measuring it.
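The thought experiment can be made concrete with a minimal expected-harm calculation. All numbers below – the event probability, the harm scores, and the meter error rates – are hypothetical and chosen only to illustrate the decision-analysis logic, not to reflect real clinical data.

```python
# Hypothetical decision analysis: is testing with an imperfect glucose
# meter better than not testing at all? All values are illustrative.

P_EVENT = 0.05          # assumed chance of a glucose event needing action
HARM_UNTREATED = 10.0   # assumed harm score if the event is missed
HARM_TREATED = 1.0      # assumed harm score if the event is detected and treated

def expected_harm_testing(p_meter_error: float) -> float:
    """Expected harm when testing with a meter that errs with
    probability p_meter_error; an erroneous result misses the event."""
    p_detect = 1.0 - p_meter_error
    return P_EVENT * (p_detect * HARM_TREATED
                      + p_meter_error * HARM_UNTREATED)

# Without testing, every event goes untreated.
harm_no_testing = P_EVENT * HARM_UNTREATED

# Even a meter that errs 20% of the time beats not testing at all
# under these assumed numbers.
assert expected_harm_testing(0.20) < harm_no_testing
```

Under these assumptions, testing only stops paying off when the meter error rate approaches 100%, which is why raising the failure frequency in the thought experiment still warrants use of the device for a long while.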
If a company is working on a home glucose meter that provides erroneous results too often (e.g., compared with existing meters), it will keep developing the meter until its failure rate is competitive. That is, there is a hierarchy of requirements for release for sale, and often the competitive requirements (the features needed to sell the product, including quality) are more stringent than any medical need or regulatory requirement (3).
Would you pay 2.5 million dollars to go to Cleveland?
Richard Fogoros suggests that there is a limit to what we can spend on healthcare (4). To make this point, he notes that if a plane could be built in which most crashes were survivable, most people would not pay the astronomical ticket price.
So regulators could require lower failure rates (less risk), forcing companies to invest more, which would raise healthcare prices. This is not done because it would be unaffordable; hence the level of risk allowed is usually driven by competition. That is risk management, but it is not the clinical benefit-risk analysis described in ISO 14971 – it is financial risk management.
2. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective, AACC Press, Washington DC, 2002, Chapter 3.
3. Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective, AACC Press, Washington DC, 2002, pp. 38–39.
4. Fogoros RN. Fixing American Healthcare. Publish or Perish Press, Pittsburgh, 2007.