ISO 14971 authors, expertise, and potential conflicts of interest

November 28, 2007

question

I have questioned the elevated status of ISO standards claimed by some. Often, people justify this status by asserting that ISO standards are prepared by a consensus of experts. This entry explores three topics related to this assertion:

·        ISO authorship

·        Expertise of authors

·        Potential conflicts of interest for authors

The membership of an ISO committee

If you have an ISO document – I have the latest version of ISO 14971 – one thing to notice is that there is no list of authors, nor even a list of committee members. I don’t understand why it is ISO’s policy to withhold this information, nor could I find an explanation (or a membership list).

Note that CLSI (formerly NCCLS) has in each standard a list of authors and subcommittee members, advisors, and observers (as well as area committee members).

What does it take to be an expert?

A simple, if flip, answer is to be on an ISO committee, since by assertion all committee members are experts. Of course, for ISO committees one cannot form an opinion, since membership is unknown outside the committee.

Potential conflicts of interest

Here are some opinions about conflict of interest regarding ISO membership (given that I don’t have a clue who the authors are). To understand conflict-of-interest concerns, it helps to know that ISO documents have quasi-regulatory status. As such, organizations can be divided into two groups: regulatory providers and regulatory consumers (see http://krouwerconsulting.com/Essays/StandardsGroups.htm).

Manufacturers – The membership from this (regulatory consumer) group is often filled with regulatory affairs professionals. Their potential conflict of interest is to shape documents to favor ease of compliance. They favor horizontal over vertical documents (see http://krouwerconsulting.com/Essays/StandardsGroups.htm).

Clinical laboratory or hospital professionals – Although this group would not seem to have a vested interest, one can question how many of these people serve as consultants for industry. If a standard is written for the clinical laboratory or elsewhere in the hospital, then this group has the same regulatory consumer potential conflict of interest as the manufacturer.

Regulators – As a regulatory provider group, the potential conflict of interest is the healthcare economics policy put in place by the current administration.

Consultants – This group often has a high potential conflict of interest since some consultants make their living by helping companies comply with ISO standards.

Trade associations – This group is the voice of manufacturers and if represented on an ISO group has the same potential conflict of interest as manufacturers, but with the added concern that trade groups are skilled in organizing manufacturers.

Note that for CLSI, any prospective member must fill out a conflict of interest statement. I am unaware of anyone ever being turned away from membership due to the conflict of interest statements.


ISO 14971 and Residual Risk

November 21, 2007

competition

The last entry was about FMEA goals, yet the word “goal” isn’t in ISO 14971. Maybe “goal” suffered the same fate as the word “mitigation” – banned from ISO. There is an implied goal in ISO 14971 – the residual risk must be acceptable. Recall that residual risk is the risk that remains after control measures have been taken. Here’s where things get a little tricky.

In cases where the residual risk is unacceptable, one is supposed to perform a risk benefit analysis to determine if benefits of the medical procedure performed by the device outweigh any possible residual risk.

To frame this discussion, consider two types of residual risk:


1.       A residual risk from a known issue, such as an interference, where eliminating this risk is not “practical”

2.       The overall residual risk from unknown issues. A certain amount of effort is used to search for risks (e.g., through FMEA, FTA, and FRACAS). At some point, more effort is considered not practical. Note: One can look at FDA recalls to see that unknown risks are often found in released products and lead to recalls (1).

Use of the word practical in ISO 14971 implies that in some cases, risk reduction is too expensive. This is not meant to be pejorative since everyone has limited resources.

In most cases in the standard, the risk/benefit analysis is positioned as an analysis of the medical device’s clinical benefit to the patient vs. its risk. But ISO 14971 does point out an additional frame for the discussion.

“Those involved in making risk/benefit judgments have a responsibility to understand and take into account the technical, clinical, regulatory, economic, sociological and political context of their risk management decisions.”

To understand the issue, consider Type 1 diabetes as an example with the medical procedure being use of a home glucose meter. Because of risks 1 and 2 above, the glucose meter will fail and provide an erroneous result, albeit rarely. This is the current status and it is clear the benefit of the home glucose meter outweighs the risk (e.g., ADA recommendations to test for glucose). Yet, if one conducts a thought experiment and starts raising the frequency of (all) home glucose meter failures, simple decision analysis (2) still warrants use of the device. That is, measuring glucose, even if it occasionally (e.g., more often than rarely) gives an erroneous result, is better (clinically) than not measuring it.
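The thought experiment can be sketched as a bare-bones expected-utility calculation, in the spirit of the decision analysis in reference 2. All numbers below – the error probabilities and the “utility” values – are invented for illustration:

```python
# Illustrative decision analysis: is testing with an imperfect glucose
# meter better than not testing at all? All numbers are hypothetical.

def expected_utility(p_erroneous, u_correct, u_erroneous):
    """Expected utility of testing, given the probability of an erroneous
    result and the utilities of acting on a correct vs. a wrong result."""
    return (1 - p_erroneous) * u_correct + p_erroneous * u_erroneous

u_no_test = 0.20      # acting blind: poor glucose control (assumed)
u_correct = 1.00      # acting on a correct result (assumed)
u_erroneous = 0.00    # acting on a wrong result (assumed)

for p in (0.0001, 0.01, 0.10, 0.50):
    better = expected_utility(p, u_correct, u_erroneous) > u_no_test
    print(f"p(erroneous)={p}: testing better than not testing? {better}")
```

Under these assumed utilities, testing beats not testing even when the meter fails far more often than any real meter does; the conclusion flips only when erroneous results become the dominant outcome.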

If a company is working on a home glucose meter that provides erroneous results too often (e.g., compared to existing meters), it will keep developing the meter until its failure rate is competitive. That is, there is a hierarchy of requirements for release for sale, and often the competitive requirements (features needed to sell the product, including quality) are more stringent than any medical need or regulatory requirement (3).

Would you pay 2.5 million dollars to go to Cleveland?

Richard Fogoros suggests that there is a limit to what we can spend on healthcare (4). To make this point, he says that if a plane could be built that would be survivable in most crashes, most people would not pay the astronomical ticket price.

So regulators could require lower failure rates (less risk), causing companies to invest more, which would result in higher healthcare prices. But this is not done because it is unaffordable; hence the level of risk allowed is usually driven by competition. This is risk management, but it is not the clinical benefit-risk analysis described in ISO 14971 – it is financial risk management.

References

1.       See http://www.accessdata.fda.gov/scripts/cdrh/cfdocs/cfRES/res.cfm

2.       Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective, AACC Press, Washington DC, 2002, Chapter 3.

3.       Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective, AACC Press, Washington DC, 2002, pp 38-39.

4.       Fogoros RN. Fixing American Healthcare. Publish or Perish Press, Pittsburgh, 2007.


FMEA goals in healthcare

November 17, 2007

goal

FMEA is now a common risk management tool used in healthcare. Here’s a quick test. If the words “minimal cut set” and “Petri net” don’t mean anything to you, then you probably don’t have a quantitative FMEA goal. The rest of this entry explains some things to know about goals.

A quantitative goal must also be measurable and realistic. For example, a goal for imprecision (reproducibility) of a clinical laboratory sodium assay might be a 4% CV. One can measure performance against this goal using a variety of experiments, including those defined by standards such as the CLSI standard EP5-A2.
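A %CV goal is measurable with nothing more than replicate results. A minimal sketch (the sodium values are hypothetical):

```python
import statistics

def percent_cv(results):
    """Coefficient of variation (%) for replicate measurements."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)  # sample SD (n-1 denominator)
    return 100.0 * sd / mean

# Hypothetical replicate sodium results (mmol/L)
sodium = [140.1, 139.5, 141.2, 140.8, 138.9, 140.3]
cv = percent_cv(sodium)
print(f"CV = {cv:.2f}%  -> meets 4% goal: {cv <= 4.0}")
```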

FMEA deals with risk. Some common pitfalls about risk goals are:

·         A goal that an event should never happen. For example, the NQF (National Quality Forum) implies such a goal by talking about “never events.” Risk is probabilistic and can never be zero. An estimated risk may be so low that in lay terms the event is said to never occur, but this lay usage is different from a formal quantitative assessment.

·         Too many goals. The NQF has a list of 28 “never events,” virtually all of which cause serious patient harm. A goal could be restated in terms of patient harm as the combined risk from any of the 28 events.

·         The Institute for Healthcare Improvement (IHI) implies goals in terms of evaluating the RPN (risk priority number) before and after implementing control measures. Some problems here are:

o   One may improve this metric by reducing the risk of less severe events (without reducing risk of severe events)

o   A severe risk with the lowest (categorical) probability of occurrence may be ignored as a candidate for improvement, since its RPN won’t change, but there may still be a way to lower risk (while keeping the same categorical probability-of-occurrence rank)
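The first of these problems is easy to demonstrate numerically. In this sketch the events, their 1-10 rankings, and the control measure are all hypothetical:

```python
# The first RPN pitfall, numerically: the aggregate RPN improves even
# though only a minor event's risk was reduced. All events and scores
# (on the usual 1-10 scales) are hypothetical.

def rpn(severity, occurrence, detection):
    """Risk priority number: the product of the three 1-10 rankings."""
    return severity * occurrence * detection

before = {
    "severe event (e.g., wrong-site surgery)": rpn(10, 2, 5),
    "minor event (e.g., paperwork delay)":     rpn(2, 8, 8),
}

# A control measure is applied only to the minor event (occurrence 8 -> 3):
after = dict(before)
after["minor event (e.g., paperwork delay)"] = rpn(2, 3, 8)

print(sum(before.values()), "->", sum(after.values()))  # 228 -> 148
# The severe event's RPN is unchanged, yet the total "improved":
assert after["severe event (e.g., wrong-site surgery)"] == 100
```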

Quantitative FMEA goals are possible and are used in the nuclear power industry although fault trees are used instead of FMEAs. Quantitative fault trees are evaluated among other ways using “minimal cut sets” and “Petri nets.”
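To illustrate what a minimal cut set is: for a small fault tree one can expand the gates into cut sets and discard the non-minimal ones directly. The tree below is a toy example, not drawn from any real analysis:

```python
from itertools import product

# Toy fault tree: TOP = AND(OR(A, B), OR(A, C)).
# A cut set is a set of basic events that causes the top event;
# a minimal cut set has no proper subset that is also a cut set.

def cut_sets(node):
    kind = node[0]
    if kind == "basic":
        return [frozenset([node[1]])]
    children = [cut_sets(child) for child in node[1]]
    if kind == "or":  # any child's cut set works
        return [cs for ch in children for cs in ch]
    # "and": combine one cut set from each child
    return [frozenset().union(*combo) for combo in product(*children)]

def minimize(sets):
    """Keep only sets with no proper subset among the candidates."""
    return [s for s in sets if not any(t < s for t in sets)]

top = ("and", [("or", [("basic", "A"), ("basic", "B")]),
               ("or", [("basic", "A"), ("basic", "C")])])

mcs = set(minimize(cut_sets(top)))
print(mcs)  # the minimal cut sets here are {A} and {B, C}
```

That is, either event A alone, or events B and C together, cause the top event; all other combinations are redundant.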

A reasonable non-quantitative goal for FMEA is to learn more about potential failure modes. However, one should realize that it is difficult to assess how much is learned.

It is easy to have a quantitative FRACAS goal because it is easy to measure failure rates from observed failures, before and after implementing control measures.
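A sketch of such a FRACAS goal, with hypothetical counts and a hypothetical goal:

```python
# Quantitative FRACAS goal: the failure rate is simply observed
# failures per unit of use, compared before and after corrective
# action. All counts and the goal itself are hypothetical.

def failure_rate(failures, device_days):
    """Failures per 1,000 device-days of observed use."""
    return 1000.0 * failures / device_days

before = failure_rate(failures=42, device_days=6000)
after = failure_rate(failures=9, device_days=6000)

goal = 2.0  # assumed goal: at most 2 failures per 1,000 device-days
print(f"before: {before:.1f}, after: {after:.1f}, goal met: {after <= goal}")
# before: 7.0, after: 1.5, goal met: True
```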


Why FRACAS is important for medical device manufacturers

November 10, 2007

failure

I have commented before that FMEA (and FTA) are used to prevent potential errors and that FRACAS is used to prevent the recurrence of observed errors. FRACAS is easier than FMEA or FTA because for FRACAS:

·         no modeling is required with respect to enumerating the possible failure modes (errors) – one simply observes the errors

·         one can easily calculate a failure rate, which can also help predict when a failure rate goal will be achieved

From a user’s perspective (e.g., medical device customer), it is of course more important to prevent errors than to prevent their recurrence (e.g., no melt down vs. preventing another melt down). However, if FRACAS is completed before release for sale, then the FRACAS activity of preventing the recurrence of observed errors is also preventing potential errors from the user’s perspective, because (again, from the user’s perspective) the clock is at zero – no errors have occurred yet because the system hasn’t been used. This is summarized in the following table.

Tool        Errors are:    Control measures used to:        Effect of tool (after release for sale):
FMEA, FTA   enumerated     prevent potential errors         errors prevented
FRACAS      observed       prevent recurrence of errors     errors prevented

(The second and third columns describe activity before release for sale.)

This does not mean that FMEA, FTA should be dropped. If a potential error has never been observed, one still must be sure that adequate control measures are in place.

So FRACAS is part of risk management in spite of the fact that it is not mentioned in ISO 14971.

Terms

FMEA – Failure Mode Effects Analysis
FTA – Fault Tree Analysis
FRACAS – Failure Reporting And Corrective Action System
Failure Mode – Error


Some ISO 14971 risk control measures won’t reduce risk

November 6, 2007

risk

The previous entry dealt with some limitations of the ISO risk management standard for medical devices – ISO 14971. This entry covers one of the limitations in more detail.

ISO 14971 fails to embrace the error – detection – recovery scheme, since it omits recovery. To see the problem, consider a clinical laboratory example in which a serum sample is analyzed for potassium.

Error – As the specimen is processed, some error occurs (OK, I am not that good at making up errors), which hemolyzes the specimen. If the cause of the error is known, then steps might be taken to minimize or eliminate it.

Detection – A technician visually examines the specimen before it is analyzed. The hemolyzed specimen is detected.

Recovery – The technician does not analyze the specimen and notifies the appropriate party to get another specimen. The end result depends on the turn-around-time requirement after re-assay:

·         If the turn-around-time requirement is met, no effect of the original error is observed.

·         If the turn-around-time requirement is not met, the effect of the original error is a delayed result.

In either of the above cases, the error – detection – recovery scheme has prevented an erroneous result as the effect of the original error. (OK, one could get an erroneous result in the new specimen).

Whereas recovery in this case seems trivial, what if, just as the technician is ready to perform the recovery, he/she gets called away and never performs it? There is a well-known example of a failed recovery in which the error was that the incorrect leg was scheduled to be amputated; the error was detected, but the recovery failed. Although the correct leg was identified in the operating room schedule (successful detection), there were multiple operating rooms and not all schedules were corrected (failed recovery) (1).

Where recovery becomes even more of an issue is when detection and recovery are located in different organizations. This is actually a common occurrence. For example, manufacturers detect a problem (this could be an official recall) and it is up to the hospital or clinical laboratory to follow the manufacturer’s recommendation as to the recovery (e.g., discard that lot of reagent).

In the risk management standard ISO 14971, a recommended control measure presents the opportunity for a failed recovery. ISO 14971 provides a hierarchy of risk control measures (mitigations), which in order of preference are:

1.       Eliminate the error

2.       Detect the error

3.       Inform the user of the error possibility (e.g., state a limitation of the procedure)

Number 3 is really part of detection (e.g., the detection is communicated). Number 3 is also commonly used for interfering substances in in-vitro diagnostic assays. This error type is the stepchild of diagnostic assays. For example, I once surveyed a year’s worth of clinical chemistry assay performance complaints and found that interferences were the main complaint (2). One can speculate how this happened: a clinician realized that some treatment or patient status was inconsistent with a laboratory result, the laboratory investigated, and the assay result was found to be incorrect, with an interfering substance as the cause of the erroneous result.

So consider the risk control measure for an assay whereby the manufacturer lists 10 substances that may interfere with the assay. How can the clinical laboratory “recover” using this knowledge (i.e., this detection)? It can’t. Determining the concentration of ten substances in every specimen is impractical (too expensive). So to review this situation:

1.       Eliminate the error – the manufacturer has tried, but failed. Ten substances still interfere (at or above certain concentrations)

2.       Detect the error – the only “detection” possible is to inform the clinical laboratory. Note that all other common detection methods (external quality control, internal algorithms) fail.

3.       Recovery – The clinical laboratory cannot perform a recovery

One should realize that whereas this is an undesirable state, it may be the best possible way of doing things given the economic constraints. As stated in the previous entry, the manufacturer is doing the right thing (as are regulators and the clinical laboratory).

However, the problem is that ISO 14971 would have us believe that all risk is now at an acceptable level, which is not the case. The erroneous result is likely to occur, after which a cause is likely to be found, since the manufacturer has stated a list of possible interfering substances.

Also, as in the previous entry, patient awareness needs to be added to the mix as a significant way to prevent patient harm.

References

1.       Scott D. Preventing medical mistakes. RN 2000;63:60-64.

2.       Krouwer JS. Estimating Total Analytical Error and Its Sources: Techniques to Improve Method Evaluation. Arch Pathol Lab Med 1992;116:726-731.


Improvement is needed for risk management guidance for in vitro medical devices

November 4, 2007

risk

When either a manufacturer or a clinical laboratory performs risk management, it is implied in the risk management standard ISO 14971 (and other literature) that risk management (1-4):

·         Identifies any product component or process step that has unacceptable risk

·         Through mitigations, reduces all remaining risk to an acceptable level

The purpose of this entry is to show that this doesn’t always happen and to suggest what to do about it.

Note 1: in order to understand ISO 14971, you need to learn ISO speak (“globally harmonized terminology”). For example, there are no lab “test results” or “assay results” – these are called “examination results.”

Note 2: ISO 14971 is intended for manufacturers. The section about risk management for clinical laboratories is based on my discussions with clinical laboratory directors.

The problem frame – ISO 14971 has a figure (H.1, page 61), which shows that there are three possibilities to prevent harm to the patient – the medical device manufacturer, the clinical laboratory, and the physician. ISO 14971 describes a mitigation* as either a way to prevent or detect an error. ISO fails to include recovery (5), which is a serious omission.

[Figure: risk cascade]

* I use here the word “mitigation” but should point out that mitigation has been banned from ISO speak and isn’t in ISO 14971.

An example problem – hCG (human chorionic gonadotropin) is an assay used to test for pregnancy. Such assays are subject to interferences, with HAMA (human anti-mouse antibody) a common example. In one case, a woman with an elevated hCG was diagnosed as having cancer and underwent chemotherapy, hysterectomy, and partial removal of one lung (6). Eventually, it was determined that she did not have cancer and all of the hCG assay results were incorrect due to HAMA interference – her actual hCG was not elevated. Cole studied this problem and found that it has occurred multiple times (7).

Manufacturer – One of the most difficult problems for manufacturers to overcome is lack of analytical specificity. This means that for many assays, a few results will be way off due to substances in the specimen that interfere with the assay. The fact that the rate of occurrence of this error is low is good, but as seen above, the consequences can result in severe harm to the patient. It is standard practice for manufacturers to accept the small rate of erroneous results and deal with the issue by stating these limitations in the product labeling (the package insert).

ISO 14971 allows stating limitations as one method – albeit the least desirable method – of risk reduction (H.4.1.c, p. 70).

In the case of HAMA and other interferences, this warning is of little value to the laboratory since a laboratory has no information as to which specimens have HAMA or other interferences and it would be prohibitively expensive to try to determine this information (e.g., the recovery will fail). (I once had roof rack straps for my car which had a warning on the package – “stop every 25 miles to make sure the straps are secure”).

Clinical Laboratory – It was a surprise to me to learn from some clinical laboratory directors that:

·         They know that occasional erroneous hCG results are reported to clinicians, which ultimately causes patient harm

·         There is a quality control possibility to test a specimen for HAMA interferences by diluting it and rerunning it, but this is rejected as too expensive

·         Thus, clinical laboratory directors recognize the risk as unacceptable, but live with it

Analysis – The manufacturer is doing the right thing. If they could economically develop an assay without interferences, they would. Regulators who approve the assay are doing the right thing. Rejecting the assay would cause more harm to patients due to the lack of information of no assay result than the harm caused by a small number of erroneous results. The clinical laboratory directors are doing the right thing. If they reran too many samples, their costs would be too high and the laboratory would go out of business (more likely the laboratory director would be fired first and the rerunning process stopped).

The manufacturer notification of limitations, while necessary and conforming to ISO 14971, is ineffective to prevent risk. The clinical laboratory either does nothing to prevent risk or could potentially do the same thing as the manufacturer – issue a warning about potential interferences in the assay report to physicians.

Proposed Solutions – Recognize the problem. The current status quo of the risk management scheme is that after risk management has been performed there is no issue, which is wrong. Issuing limitations that are ineffective in reducing risk must be acknowledged as such. The outcome of this risk management task, for either the manufacturer or the clinical laboratory, must be to classify the HAMA event as an undesirable* risk. It should be acknowledged that coming up with a method – which must be economical – to reduce this risk to an acceptable level is a work in progress.

*Use of the term unacceptable risk makes no sense, since no one would tolerate unacceptable risk. Hence, a risk management program could, through mitigations, reduce previously unacceptable risk events to some combination of acceptable and undesirable risk events.

The role of the physician and patient – I will leave the role of the physician to someone else. I suggest that the ISO figure above is wrong: it should have one more cascade, namely the possibility for the patient to detect and recover from a problem; if this fails, then harm will occur. One should not discount patients as insufficiently knowledgeable. Through the Internet, there is a growing movement for patients to take more control of their health. This includes assessing laboratory results, which are playing an increasing role in medical decision making (for one example, see reference 8). So as part of a risk management program, one should include the patient.

References

1.       ISO 14971 http://www.iso.org/iso/iso_catalogue/catalogue_tc/catalogue_detail.htm?csnumber=38193

2.       Can’t afford to buy ISO 14971? Then read summaries in Ref. 2-4 http://www.devicelink.com/ivdt/archive/06/03/011.html

3.       http://www.devicelink.com/ivdt/archive/06/04/009.html

4.       http://www.devicelink.com/ivdt/archive/06/05/009.html

5.       See Figure 4 in Krouwer, JS. An Improved Failure Mode Effects Analysis for Hospitals. Archives of Pathology and Laboratory Medicine: Vol. 128, No. 6, pp. 663–667. See http://arpa.allenpress.com/pdfserv/10.1043%2F1543-2165(2004)128%3C663:AIFMEA%3E2.0.CO%3B2

6.       Sainato, D. How labs can minimize the risk of false positive results. Clin Lab News 2001;27:6-8.

7.       Cole, LA Rinne, KM Shahabi S.and Omrani A. False-Positive hCG Assay Results Leading to Unnecessary Surgery and Chemotherapy and Needless Occurrences of Diabetes and Coma. Clinical Chemistry. 1999;45:313-314.

8.       http://men.webmd.com/news/20030527/high-psa-level-check-again  


Who made ISO king

November 1, 2007

king

I have been working on a CLSI (Clinical and Laboratory Standards Institute) standard on risk management. A preliminary version is available; it needs revision and is getting it. As part of this process, comments are received and addressed using a consensus process. Having seen a few of the comments, I find one of them bothersome, not so much for the issue raised as for the justification supplied: that the CLSI document deviates from the ISO standard on risk management – 14971. So this blog entry questions whether ISO documents should be taken as gospel.

I have commented before on a specific ISO document – 9001. The title of my article says it all – “ISO 9001 has had no effect on quality in the in vitro medical diagnostics industry.”

ISO 14971 states things without providing any justification. There is a bibliography at the end, but no links from the text to it. The document is not peer reviewed, although it undergoes its own consensus process. One is basically supposed to take ISO 14971 as correct because it is “based on an international group of experts”. I put the preceding phrase in quotes because anyone serving on an ISO committee is automatically conferred expert status (this is true for CLSI committees as well).

So perhaps it is not even iconoclastic to question an ISO document, and one should certainly not suppress an idea because it deviates from an ISO document.