My thoughts on QC

June 15, 2019

I just read a Clin Lab News article on QC that, IMHO, is misleading. Here are my thoughts.

The purpose of QC is to determine whether a process is in control or not. In clinical chemistry, the process is running an assay. An out of control process is undesirable because it can yield unpredictable results.

QC by itself cannot guarantee the quality of patient results, even when the process is in control. This is because QC does not detect all errors (for example, an interference).

The quality of the results of an in control process is called its process capability (e.g., its inherent accuracy). QC cannot change this, regardless of the QC rules that are used.

QC is like insurance; hence, cost should not be considered in designing a QC program. That is, regardless of how low risk a failure mode is, one should never abandon QC.

Although running more QC can detect an out of control process sooner, any QC program should always protect patient results from being reported when an out of control condition is detected. Risk is not involved.
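To make the "in control" decision concrete, here is a minimal sketch of one widely used QC rule, the Westgard 1-3s rule. The rule choice, target mean, and SD here are my illustrative assumptions, not something from the article:

```python
# Minimal sketch of the Westgard 1-3s QC rule, assuming the control
# material's target mean and SD are known from method validation.
# Any QC result more than k SD from the mean flags the run as out of
# control; patient results from that run should be held, not reported.

def run_in_control(qc_results, mean, sd, k=3.0):
    """Return True if every QC result is within k SD of the target mean."""
    return all(abs(x - mean) <= k * sd for x in qc_results)

# Example: a glucose control with target 100 mg/dL and SD 2 mg/dL
print(run_in_control([99.1, 101.8], mean=100, sd=2))  # True (in control)
print(run_in_control([99.1, 107.2], mean=100, sd=2))  # False (out of control)
```

Stricter multirules catch smaller shifts sooner, but, per the point above, no rule changes the process capability of an in-control assay.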


Stakeholders that participate in performance standards

May 11, 2019

Performance standards are used in several ways: to gain FDA approval, to make marketing claims, and to test assays in routine use after release for sale.

Using glucose meters as an example…

Endocrinologists, who care for people with diabetes, would be highly suited to writing standards. They are in a position to know the magnitude of error that will cause an incorrect treatment decision.

The FDA, with its statisticians, biochemists, and physicians, would also be well suited.

Companies, through their regulatory affairs people, know their systems better than anyone, although one can argue that their main goal is to create a standard that is the least burdensome possible.

So in the case of glucose meters, at least for the 2003 ISO 15197 standard, regulatory affairs people ran the show.


Just published

May 8, 2019

The article, “Getting More Information From Glucose Meter Evaluations” has just been published in the Journal of Diabetes Science and Technology.

Our article makes several points. The ISO 15197 glucose meter standard (2013 edition) requires a table showing the percentage of system accuracy results within 5, 10, and 15 mg/dL. Our recommendation is to graph these results in a mountain plot – it is a perfect example of when a mountain plot should be used.

Now I must confess that until we prepared this paper, I had not read ISO 15197 (2013). But based on some reviewer comments, it was clear that I had to bite the bullet, send money to ISO and get the standard. Reading it was an eye opener. The accuracy requirement is:

95% of results within ± 15 mg/dL (reference < 100 mg/dL) or within ± 15% (reference ≥ 100 mg/dL), and
99% of results within the A and B zones of an error grid
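The accuracy criterion above can be sketched as a check over paired meter/reference results. The function names are mine, units are mg/dL, and the error-grid zone check is omitted since zone boundaries depend on the grid used:

```python
# Sketch of the ISO 15197 (2013) system accuracy criterion applied to
# paired (meter, reference) glucose results in mg/dL. The 99% A/B-zone
# error grid check is not included here.

def within_iso_limits(meter, reference):
    """True if a single paired result meets the +/-15 mg/dL / +/-15% limit."""
    if reference < 100:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.15 * reference

def passes_95_percent(pairs):
    """True if at least 95% of (meter, reference) pairs are within limits."""
    n_ok = sum(within_iso_limits(m, r) for m, r in pairs)
    return n_ok >= 0.95 * len(pairs)
```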

I knew this. But what I didn’t know until I read the standard is user error from the intended population is excluded from this accuracy protocol. Moreover, even the healthcare professionals performing this study could exclude any result if they thought they made an error. I can imagine how this might work: That result can’t be right…

In any case, as previously mentioned in this blog, in the section when users are tested, the requirement for 99% of the results to be within the A and B zones of an error grid was dropped.

In the section where results may be excluded, failure to obtain a result is listed since if there’s no result, you can’t get a difference from reference. But there’s no requirement for the percentage of times a result can be obtained. This is ironic since section 5 is devoted to reliability. How can you have a section on reliability without a failure rate metric?
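For what it's worth, the missing metric would be trivial to define. A hypothetical sketch (the 2% acceptance limit is invented for illustration; the standard specifies no such number):

```python
# Hypothetical failure-rate metric of the kind Section 5 could require:
# the percentage of test attempts that fail to produce any result.

def failure_rate_percent(attempts, failures):
    """Percent of test attempts that returned no result."""
    return 100.0 * failures / attempts

# e.g., 7 failed strips out of 350 attempts
rate = failure_rate_percent(350, 7)
print(rate)          # 2.0
print(rate <= 2.0)   # True against an assumed 2% acceptance limit
```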


Tips to get your assay approved by the FDA

April 1, 2019

  1. Always tell the truth.
  2. Don’t offer information that wasn’t asked for. As an example,
    FDA: Your study is acceptable.
    You: We have another study that also confirms that.
    FDA: Oh, tell me about it… The result: a 6-week delay.
  3. Don’t speculate. As an example,
    FDA: What caused that outlier?
    You: We think it might be an interfering substance.
    FDA: Oh, Let’s review your interference studies…
  4. Know when to say yes and when to say no.
    Agree to change wording, graphs, and so on. Also agree to change calculation methods even when you think your original methods are correct. Challenge a finding that requires you to repeat or provide new studies, unless you agree.
  5. Don’t submit data that doesn’t meet specifications. Doesn’t sound smart but I’ve seen it happen.

The value of error grids

March 29, 2019

My colleague and I sang the praises of error grids as a way to specify performance – for any assay. To recap, here are some of the benefits:

  1. Unlike most specifications, the limits can change with concentration
  2. Unlike most specifications, the limits need not be symmetrical
  3. Most specifications have one set of limits, implying that results within limits cause no harm and results outside of limits cause harm. Error grids have multiple sets of limits – called zones – whereby harm can be none, minor, or major.
  4. Error grid zones account for 100% of the results – they cover the entire XY space of candidate assay vs. reference assay. Most specifications cover 95% or 99% of results, leaving the balance unspecified.
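To illustrate the zone idea in code: every (reference, candidate) pair falls into exactly one zone, so 100% of the XY space is covered. The boundaries below are invented for illustration; a real grid such as the Parkes grid has clinically derived, asymmetric, concentration-dependent zones:

```python
# Illustrative error-grid classifier: each paired result gets a harm
# level, and the zones partition the whole XY space. These percent-error
# boundaries are made up for the example, not from any published grid.

def zone(reference, candidate):
    """Return the zone (harm level) for one candidate-vs-reference pair."""
    error_pct = abs(candidate - reference) / reference * 100
    if error_pct <= 15:
        return "A"   # no harm
    if error_pct <= 40:
        return "B"   # minor harm
    return "C+"      # major harm

print(zone(100, 110))  # A
print(zone(100, 130))  # B
print(zone(100, 160))  # C+
```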

Krouwer JS and Cembrowski GS. Towards more complete specifications for acceptable analytical performance – a plea for error grid analysis. Clinical Chemistry and Laboratory Medicine, 2011;49:1127-1130.


Summary of what’s wrong with the ISO 15197 2013 glucose meter standard

March 24, 2019

  1. Minimum system accuracy performance criteria (6.3.3) – I previously commented that the word “minimum” is silly: one either meets the requirements or one does not. But the big problem is Notes 1 and 2 in this section, which say that the test is not to be carried out by actual users. Thus, the protocol is biased by excluding user error. In the section where users are included, the acceptance criteria (8.2) drop the requirement for 99% of the results to be within the A and B zones of an error grid. The requirement for 95% of the results to be within ± 15 mg/dL below 100 mg/dL and within ± 15% above 100 mg/dL remains. Thus 5% of the results are unspecified, the same as in the 2003 version. This means that a person who tests 3 times daily could have a dangerous error from their meter about once a week, in spite of the meter meeting the ISO 15197 standard.
  2. Safety and Reliability Testing (Section 5) – A hallmark of reliability testing is the frequency of failures to obtain a result. There is nothing in this section (or elsewhere in the standard) that tallies the frequency of failed results or specifies limits for the percentage of failures. This makes no sense for a standard about a POC test that is needed emergently. Failure to obtain a result is a frequent event in the FDA adverse event database for glucose meters.
  3. If you want to see who wrote the standard, you can’t. As with all ISO standards, there is no list of authors or members who served on the committee.
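The once-a-week estimate in point 1 is simple arithmetic (the 3-tests-per-day frequency is the one used in the text):

```python
# Arithmetic behind the "about once a week" claim in point 1: with 5%
# of results carrying no accuracy claim under the standard, a person
# testing 3 times daily can expect roughly one such result per week.

tests_per_week = 3 * 7          # 3 tests/day for 7 days = 21 tests
unspecified_fraction = 0.05     # the 5% of results the standard leaves unspecified
expected = round(tests_per_week * unspecified_fraction, 2)
print(expected)  # 1.05 -> about one unspecified result per week
```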

Who influences CMS and CDC?

March 23, 2019

A recent editorial, online in J Diabetes Science and Technology, disagrees with the proposed CLIA limits for HbA1c provided by CMS and CDC (The Need for Accuracy in Hemoglobin A1c Proficiency Testing: Why the Proposed CLIA Rule of 2019 Is a Step Backward). The proposed CLIA limits are ± 10%; the NGSP limits are 5%, and the CAP limits are 6%. Reading the Federal Register, I don’t understand the basis for the 10%.

This reminds me of another CMS decree from the early 2000s – Equivalent Quality Control. Under this program, a lab director could run quality control for 10 days alongside the automated internal quality checks and decide whether the two were equivalent. If the answer was yes, the frequency of quality control could be reduced to once a month. This made no sense!