Tips to get your assay approved by the FDA

April 1, 2019

  1. Always tell the truth.
  2. Don’t offer information that wasn’t asked for. As an example,
    FDA: Your study is acceptable.
    You: We have another study that also confirms that.
    FDA: Oh, tell me about it… (The result: a 6-week delay.)
  3. Don’t speculate. As an example,
    FDA: What caused that outlier?
    You: We think it might be an interfering substance.
    FDA: Oh, Let’s review your interference studies…
  4. Know when to say yes and when to say no.
    Agree to change wording, graphs, and so on. Also agree to change calculation methods even when you think your original methods are correct. But challenge a finding that requires you to repeat studies or provide new ones, unless you agree with it.
  5. Don’t submit data that doesn’t meet specifications. That may sound obvious, but I’ve seen it happen.

A bone to pick with AACC

August 17, 2018

I registered for and attended the AACC meeting in Chicago, but I couldn’t make all of the scientific sessions of interest to me.

There was a link to download handouts for any session (great) but there were 2 problems:

  1. Handouts for some sessions were listed as “not yet available.” That is perhaps understandable before the meeting, but they are still listed that way two weeks after the meeting ended.
  2. Of the handouts that I did download, some of the material was unreadable (particularly the graphs).

AACC needs to improve the quality of their meeting.


Who performed your test?

August 15, 2018

The conventional wisdom is that if a medical procedure is recommended based on the result of a medical test, you should have the test repeated before submitting to that procedure.

Good advice, but it needs an addition: you should have the test repeated by a different method. In my book, I describe a case in which, because of suspected cancer from an elevated hCG result, the hCG assay was repeated 45 times while unnecessary treatment, including surgery, was performed. It wasn’t until the assay was repeated on a different method that the hCG result was found to be normal; the woman never had cancer.

But the lab report that I view online, although it includes graphs of previous results and the expected normal ranges, provides no information about which method or manufacturer was used to perform the test. I have seen a lab report from Europe on which the manufacturer is listed. This information should be on lab reports.


A selected catalog of critiques

July 12, 2018

The highlighted articles can be viewed without a subscription.

Imprecision calculations – Evaluations commonly reported total imprecision as less than within-run imprecision. Correct calculations are explained.

How to Improve Estimates of Imprecision Clin. Chem., 30, 290-292 (1984)
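
As a rough sketch of the correct approach (not a transcription of the article), total imprecision can be estimated from variance components, which guarantees that it is never less than within-run imprecision. The design and numbers below are made up.

```python
# Variance-components sketch for a balanced single-factor precision design:
# k runs with n replicates each. All results are made-up illustrations.
from statistics import mean, variance

runs = [
    [10.1, 10.3, 10.2, 10.4],
    [10.6, 10.5, 10.7, 10.6],
    [10.0, 10.2, 10.1, 10.1],
]
n = len(runs[0])                                      # replicates per run

ms_within = mean(variance(r) for r in runs)           # pooled within-run mean square
ms_between = n * variance([mean(r) for r in runs])    # between-run mean square

s2_within = ms_within
s2_between = max(0.0, (ms_between - ms_within) / n)   # truncate a negative component at zero
s2_total = s2_within + s2_between                     # total variance >= within-run variance

print(f"within-run SD = {s2_within ** 0.5:.3f}")
print(f"total SD      = {s2_total ** 0.5:.3f}")
```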

Total error models – Modeling total error by adding imprecision to bias is popular but fails to account for several other error sources. These articles (and others) provide alternative models.

Estimating Total Analytical Error and Its Sources: Techniques to Improve Method Evaluation Arch Pathol Lab Med., 116, 726-731 (1992)

Setting Performance Goals and Evaluating Total Analytical Error for Diagnostic Assays Clin. Chem., 48: 919-927 (2002)
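
For context, here is a minimal sketch of the simple model being criticized, with made-up numbers: total error as average bias plus a multiple of the imprecision.

```python
# The popular "bias plus imprecision" total error model (illustrative numbers).
bias = 0.3   # average bias, concentration units
sd = 0.5     # imprecision (SD), same units
z = 1.96     # coverage factor for roughly 95% of random error

total_error = abs(bias) + z * sd
print(f"modeled total error = {total_error:.2f}")
# Only average bias and random error appear here; interferences, outliers,
# drift, and user error are among the sources such a model leaves out.
```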

Overly optimistic project completion schedules – Project managers would forecast completion dates that were never met. The article shows how to get better completion estimates using past data.

Beware the Percent Completion Metric Research Technology Management, 41, 13-15, (1998)
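
One way (not necessarily the article’s exact method) to use past data is to correct a new forecast by the historical ratio of actual to forecast durations. Everything below is hypothetical.

```python
# Hypothetical illustration: adjust a manager's forecast by the historical slip factor.
past_forecast_months = [6, 8, 5, 10]   # what was promised (made up)
past_actual_months = [9, 13, 8, 15]    # what actually happened (made up)

slip_factor = sum(past_actual_months) / sum(past_forecast_months)

new_forecast = 7                       # months, the latest promise
adjusted = new_forecast * slip_factor
print(f"historical slip factor = {slip_factor:.2f}")
print(f"adjusted completion estimate = {adjusted:.1f} months")
```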

GUM – It was suggested that hospital labs carry out uncertainty analyses according to the guide to the expression of uncertainty in measurement. There’s no way a hospital lab could carry out this work.

A Critique of the GUM Method of Estimating and Reporting Uncertainty in Diagnostic Assays Clin. Chem., 49:1818-1821 (2003)

ISO 9001 – There have been many valuable quality initiatives. In the late 80s, ISO 9001 was a program that certified that companies that passed it had high quality. But it was nothing more than documentation; it did nothing to improve quality. Maybe the lab equivalent, ISO 15189, is the same.

ISO 9001 has had no effect on quality in the in-vitro medical diagnostics industry Accred. Qual. Assur., 9: 39-43 (2004)

Bland-Altman plots – Bland-Altman plots (difference plots) call for plotting the difference y-x vs. (y+x)/2 in order to prevent spurious correlations. But the article below shows that if x is a reference method, following Bland and Altman’s advice will itself produce a spurious correlation. The difference y-x should be plotted vs. x when x is a reference method.

Why Bland-Altman plots should use X, not (Y+X)/2 when X is a reference method Statistics in Medicine, 27 778-780 (2008)
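
A minimal sketch of the recommended plot, with simulated data: the differences y-x are plotted against the reference result x rather than against (y+x)/2.

```python
# Difference plot against the reference method x (simulated data).
import random
import matplotlib.pyplot as plt

random.seed(1)
x = [random.uniform(50, 150) for _ in range(50)]      # reference method results
y = [xi + 2.0 + random.gauss(0.0, 5.0) for xi in x]   # test method: small bias + noise

d = [yi - xi for xi, yi in zip(x, y)]

plt.scatter(x, d)                                     # differences vs. x, not vs. (y + x) / 2
plt.axhline(0, linestyle="--")
plt.xlabel("reference method (x)")
plt.ylabel("difference (y - x)")
plt.show()
```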

Six Sigma – This metric is often presented as a sole quality measure, but it basically measures only average bias and imprecision. As this article shows, there can be severe problems with an assay even when it has a high sigma.

Six Sigma can be dangerous to your health Accred Qual Assur 14 49-52 (2009)
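
For reference, here is the sigma metric as it is usually calculated, with made-up numbers; note that nothing in the formula reflects rare large errors.

```python
# Sigma metric = (allowable total error - |bias|) / imprecision (illustrative numbers).
tea = 10.0   # total allowable error, %
bias = 1.0   # average bias, %
cv = 1.5     # imprecision (CV), %

sigma = (tea - abs(bias)) / cv
print(f"sigma metric = {sigma:.1f}")   # 6.0 here, yet an occasional interference-driven
                                       # outlier would leave this number unchanged
```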

Glucose standards – The glucose meter standard ISO 15197 has flaws. This letter points out what the experts in a question-and-answer forum missed.

Wrong thinking about glucose standards Clin Chem, 56 874-875 (2010)

POCT12-A3 – The article explains flaws in this CLSI glucose standard.

The new glucose standard POCT12-A3 misses the mark Journal of Diabetes Science and Technology, September 7 1400–1402 (2013)

Regulatory approval evaluations – The performance of assays during regulatory evaluations is often considerably better than their performance in the field. The article gives some reasons why.

Biases in clinical trials performed for regulatory approval Accred Qual Assur, 20:437-439 (2015)

MARD – This metric for classifying glucose meter quality leaves a lot to be desired. The article below suggests an alternative.

Improving the Glucose Meter Error Grid with the Taguchi Loss Function Journal of Diabetes Science and Technology, 10 967-970 (2016)
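
A minimal sketch of how MARD is typically computed, with made-up meter and reference values; a single averaged percentage says nothing about where the errors fall or how much harm they could cause.

```python
# MARD: mean absolute relative difference between meter and reference (made-up values).
meter     = [105, 98, 152, 60, 210, 88]
reference = [100, 95, 160, 70, 200, 90]

mard = 100 * sum(abs(m - r) / r for m, r in zip(meter, reference)) / len(meter)
print(f"MARD = {mard:.1f}%")
```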

Interferences – Motivated by a recent paper in which interferences were treated almost as a new discovery (and given a new name), this paper discusses how specifications and analysis methods can be improved by accounting for interferences. I also mention how the CLSI EP7 standard reports interferences incorrectly, which could cause problems for labs.

Interferences, a neglected error source. Accred. Qual. Assur. 23(3):189-192 (2018).
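
As a generic sketch (not the EP7 protocol verbatim), interference is often estimated from paired aliquots, one spiked with the candidate interferent and one left as a control; the made-up numbers below give the idea.

```python
# Paired-aliquot interference check: bias = mean(spiked - control). Made-up results.
from statistics import mean

control = [4.9, 5.1, 5.0, 4.8, 5.2]   # unspiked aliquots
spiked  = [5.6, 5.9, 5.7, 5.5, 6.0]   # same samples with interferent added

bias = mean(s - c for s, c in zip(spiked, control))
print(f"interference bias = {bias:.2f} ({100 * bias / mean(control):.1f}%)")
```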


How to insult manufacturers

June 22, 2018

In the third article about commutability, one of the stated reasons for non-commutability is use of the wrong calibration model for the assay.

First of all, an incorrect calibration model has nothing to do with commutability. Sure, it causes errors but so do a lot of things, like coding the software incorrectly.

But what’s worse is the example given. There are a bunch of points that are linear up to a certain level, after which the response drops off. In this example, the calibration model chosen is linear, which of course is wrong. But come on, people: do you really think a manufacturer would mess this up?
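
For what it’s worth, here is what that example amounts to, with simulated numbers: a response that is linear at low concentrations and rolls off at the top, force-fit with a straight line.

```python
# Straight-line fit to a response that rolls off at high concentration (simulated data).
import numpy as np

conc   = np.array([0, 10, 20, 40, 80, 160, 320], dtype=float)
signal = np.array([1, 11, 21, 41, 78, 130, 170], dtype=float)   # rolls off above ~80

slope, intercept = np.polyfit(conc, signal, 1)    # the (wrong) linear calibration model
residuals = signal - (slope * conc + intercept)
print("residuals:", np.round(residuals, 1))       # systematic misfit at both ends
```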


Advice to prevent another Theranos

May 28, 2018

It’s not surprising that there are a bunch of articles about Theranos. An article here from Clin Chem Lab Med wrote, “We highlight the importance of transparency and the unacceptability of fraud and false claims.”

And one of the items in the table that followed was:

“Do not make false claims about products…”

Is the above really worth publishing? On the other hand, the article talks about an upcoming movie about Theranos starring Jennifer Lawrence. Now that is worth publishing.


Big errors and little errors

May 27, 2018

In clinical assay evaluations, the focus most of the time is on “little” errors. By little errors I mean average bias and imprecision that exceed goals. Now I don’t mean to be dismissive of little errors, since if bias or imprecision don’t meet goals, the assay is unsuitable. One of the reasons to distinguish between big and little errors is that in evaluations, big errors are often discarded as outliers. This is especially true in proficiency surveys, but even for a simple method comparison one is justified in discarding an outlier, because the value would otherwise perturb the bias and imprecision estimates.
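
A quick made-up illustration of that perturbation: one gross error shifts both the bias and the imprecision estimates.

```python
# Effect of a single outlier on bias (mean difference) and imprecision (SD). Made-up data.
from statistics import mean, stdev

diffs = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1, 6.5]   # test-minus-reference differences; last is an outlier

print(f"with outlier:    bias = {mean(diffs):.2f}, SD = {stdev(diffs):.2f}")
print(f"without outlier: bias = {mean(diffs[:-1]):.2f}, SD = {stdev(diffs[:-1]):.2f}")
```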

But big errors cause big problems, and most evaluations focus on little errors, so how are big errors studied? Other than running thousands of samples, a valuable technique is to perform an FMEA (Failure Mode and Effects Analysis). This can (and should) cover user error, software, and interferences, in addition to the usual items. An FMEA study is often not received very enthusiastically, but it is a necessary step in trying to ensure that an assay is free from both big and little errors. Of course, even with a completed FMEA, there are no guarantees.
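
As a minimal sketch (the failure modes and scores below are invented), an assay FMEA lists failure modes and ranks them by a risk priority number: severity times occurrence times detection.

```python
# Tiny FMEA sketch: rank made-up failure modes by RPN = severity * occurrence * detection.
failure_modes = [
    # (failure mode,                   severity, occurrence, detection)
    ("hemolysis interference",                8,          4,         5),
    ("user skips QC step",                    7,          3,         6),
    ("software rounds result incorrectly",    9,          2,         7),
    ("reagent stored too warm",               6,          3,         4),
]

for name, sev, occ, det in sorted(failure_modes, key=lambda f: f[1] * f[2] * f[3], reverse=True):
    print(f"{name:36s} RPN = {sev * occ * det}")
```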