Two examples of why interferences are important and a comment about a “novel approach” to interferences

September 29, 2017

I had occasion to read an open access paper, “full method validation in clinical chemistry.” With that title, one expects the big picture, and that is what the paper delivers. But when it discusses analytical method validation, the concept of testing for interfering substances is missing: precision, bias, and commutability are the topics covered. Now one can say that an interference will cause a bias, and this is true, but nowhere do these authors mention testing for interfering substances.

The problem is that these papers are eventually turned into guidelines, such as ISO 15197, the guideline for glucose meters. That guideline allows 1% of the results to be unspecified (it used to be 5%). This means that an interfering substance could cause a large error, resulting in serious harm, in 1% of the results. Given the frequency of glucose meter testing, this translates to roughly one potentially dangerous result per month for a glucose meter that is acceptable according to ISO 15197. If more attention were paid to interfering substances, and to the fact that the errors they cause can be large and cause severe patient harm, the guideline might not have allowed 1% of the results to remain unspecified.
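To make that arithmetic concrete, here is a back-of-the-envelope sketch. The testing frequency is my assumption for illustration, not a figure from ISO 15197.

```python
# Rough calculation: how often could an "unspecified" result occur for one
# patient under the ISO 15197 allowance? Testing frequency is an assumption.

tests_per_day = 3            # assumption: a typical insulin user tests ~3x/day
days_per_month = 30
unspecified_fraction = 0.01  # ISO 15197:2013 leaves 1% of results unspecified

tests_per_month = tests_per_day * days_per_month
expected_unspecified = tests_per_month * unspecified_fraction

print(f"{tests_per_month} tests/month -> "
      f"~{expected_unspecified:.1f} potentially unspecified result(s)/month")
# 90 tests/month -> ~0.9 potentially unspecified result(s)/month
```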

I attended a local AACC talk given by Dr. Inker about GFR. The talk, which was very good, had a slide about a paper on creatinine interferences. After the talk, I asked Dr. Inker how she dealt with creatinine interferences on a practical level. She said there was no way to deal with this issue, which was echoed by the lab people there.

Finally, there is a paper by Dr. Plebani, who cites the paper: Vogeser M, Seger C. Irregular analytical errors in diagnostic testing – a novel concept. Clin Chem Lab Med 2017, ahead of print. Since this is not an open access paper, I didn’t read it, but from what I can tell from Dr. Plebani’s comments, the cited authors have discovered the concept of interfering substances and think that people should devote attention to it. Duh! Particularly irksome is Vogeser and Seger’s suggestion that “we suggest the introduction of a new term called the irregular (individual) analytical error.” What’s wrong with interference?


Overinterpretation of results – bad science

June 16, 2017

A recent article (subscription required) in Clinical Chemistry suggests that the results in many accuracy studies are overinterpreted. The authors go on to say that there is evidence of “spin” in the conclusions. All of this is a euphemistic way of saying that the conclusions are not supported by the study that was conducted, which means the science is faulty.

As an aside, early in the article, the authors imply that overinterpretation can lead to false positives, which can cause potential overdiagnosis. I have commented that the word overdiagnosis makes no sense.

But otherwise, I can relate to what the authors are saying – I have many posts of a similar nature. For example…

I have commented that Westgard’s total error analysis, while useful, does not live up to his claims of being able to determine the quality of a measurement procedure.

I commented that a troponin assay was declared “a sensitive and precise assay for the measurement of cTnI” in spite of the fact that, in the results section, the assay failed the ESC-ACC (European Society of Cardiology – American College of Cardiology) guidelines for imprecision.

I published observations that most clinical trials conducted to gain regulatory approval for an assay are biased.

I suggested that a recommendation section should be part of Clinical Chemistry articles. There is something about the action verbs in a recommendation that makes people think twice.

It would have been interesting if the authors had determined how many of the studies were funded by industry, but on the other hand, you don’t have to be part of industry to state conclusions that are not supported by the results.

 


Comparison of company vs. standards organization specifications

April 11, 2017

For almost all of my career, I’ve been working to determine performance specifications for assays, including the protocols and data analysis methods used to see whether performance has been met. This work has been performed mainly for companies but occasionally also for standards groups. There are some big differences.

Within a company, the specifications are very important:

If the product is released too soon, before the required performance has been met, the product may be recalled, patients may suffer harm, and overall the company may suffer financially.

If the product is released too late, the company will definitely suffer financially as “time to market” has been shown in financial models to be a key success factor in achieving profit goals.

Company specifications are built around two main factors – what performance is competitive and how the company can be sure that no patients will be harmed. In my experience this has simply led to two goals: 95% of the differences between the company assay and reference should be within limits that guarantee a competitive assay, and no difference should be large enough to cause patient harm (a clinical standard).
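As an illustration only, here is a minimal sketch of what checking both goals looks like. The limits and the simulated differences are made up for this example; they are not any company’s actual criteria or data.

```python
import numpy as np

# Sketch: check a set of assay-minus-reference differences against the two
# company goals described above. All numbers here are illustrative.

rng = np.random.default_rng(0)
differences = rng.normal(loc=1.0, scale=4.0, size=200)  # assay minus reference

competitive_limit = 10.0  # assumed limit that keeps the assay competitive
harm_limit = 30.0         # assumed clinical limit beyond which harm is possible

within_competitive = np.mean(np.abs(differences) <= competitive_limit)
any_harmful = np.any(np.abs(differences) > harm_limit)

print(f"Fraction within competitive limits: {within_competitive:.1%} (goal: at least 95%)")
print(f"Any difference large enough to cause harm: {any_harmful} (goal: False)")
```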

Standards groups seem to have a different outlook. Without being overly cynical, the standards adopted often seem designed to guarantee that no company’s assay will fail the specification. Thus, 95% of differences between the assay and reference should be within these limits. There is almost never a mention of larger errors that may cause patient harm.

Thus, it is somewhat ironic that company specifications are usually more difficult to achieve than specifications published by the standards organizations.


Antwerp talk about total error

March 12, 2017

Looking at my blog stats, I see that a lot of people are reading the total analytical error vs. total error post. So, below are the slides from a talk, The “total” in total error, that I gave at a conference in Antwerp in 2016. The slides have been updated. Because they were written to accompany a talk, the slides are not as effective on their own.

 

 

TotalError


EFLM – after three years it’s disappointing

February 15, 2017


Thanks to Sten Westgard, whose website alerted me to an article about analytical performance specifications. Thanks also to Clin Chem Lab Med for making this article available without a subscription.

As a reminder, the EFLM task group was going to fill in the details about performance specifications as initially described by the Milan conference held in 2014.

Basically, what this paper does is assign analytes (not all analytes that can be measured, but a subset) to one of three categories for how to arrive at analytical performance specifications: clinical outcomes, biological variation, or state of the art. Note that no specifications are provided – only which analytes fall into which category. It doesn’t seem like this should take three years.

And I don’t agree with this paper.

For one, talking about “analytical” performance specifications implies that user error and other mishaps that cause errors are not part of the deal. This makes no sense, because the preferred option is to base specifications on the effect of assay error on clinical outcomes, and a clinical outcome does not care whether the source of the error was analytical.

I also don’t agree with the second and third options (biological variation and state of the art) ever playing a role. My reasoning follows:

If a clinician orders an assay, the test must have some use for the clinician to decide on treatment. If this is not the case, the only reason a clinician would order such an assay is that he has to make a boat payment and needs the funds.

So, for example, say the clinician will provide treatment A (often no treatment) if the result falls within X1–X2, and treatment B if the result is greater than X2. Of course this is oversimplified, since other factors besides the assay result are involved. But if the assay result is 10 times X2 while the true value is between X1 and X2, the clinician will make the wrong treatment decision because of laboratory error. I submit that this model applies to all assays, and that if one assembles clinician opinion, one can construct error specifications (see last sentence at bottom).
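Here is a toy version of that decision model. The decision limits and values are hypothetical, chosen only to illustrate how a gross error flips the treatment decision.

```python
# Toy decision model: treatment A if the result falls within X1-X2,
# treatment B if it exceeds X2. All numbers are hypothetical.

X1, X2 = 50.0, 100.0  # hypothetical decision limits

def treatment(result):
    """Return the treatment the clinician would choose for a given result."""
    return "B" if result > X2 else "A"

true_value = 80.0         # truth between X1 and X2 -> treatment A is correct
reported_value = 10 * X2  # gross error, e.g., from an interference

print("Decision based on the true value:", treatment(true_value))            # A (correct)
print("Decision based on the erroneous result:", treatment(reported_value))  # B (wrong)
```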

Other comments:

In the event that outcome studies do not exist, the authors encourage double-blind randomized controlled trials. Get real, people – these studies would never be approved (e.g., feeding clinicians the wrong answer to see what happens)!

The authors also suggest simulation studies; I have previously commented that the premier simulation study they cite (the Boyd and Bruns glucose meter simulations) was flawed.

The Milan 2014 conference rejected the use of clinician opinion to establish performance specifications. I don’t see how clinical chemists and pathologists trump clinicians.


Help with sigma metric analysis

January 27, 2017


I’ve been interested in glucose meter specifications and evaluations. There are three sources of glucose meter specifications:

FDA glucose meter guidance
ISO 15197:2013
glucose meter error grids

There are various ways to evaluate glucose meter performance. What I wished to look at was the combination of sigma metric analysis and the error grid. I found this article about the sigma metric analysis and glucose meters.

After looking at this, I understand how to construct these so-called method decision charts (MEDx). But here’s my problem: in these charts, the total allowable error (TEa) is a constant, and this is not the case for the TEa of an error grid. There, the TEa changes with the glucose concentration. Moreover, it is not even the same at a specific glucose concentration, because the “A” zone limits of an error grid (I’m using the Parkes error grid) are not symmetrical.

I have simulated data with a fixed bias and a constant CV throughout the glucose meter range. But with a changing TEa, the estimated sigma also changes with glucose concentration.
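A small numerical sketch of the problem: with a constant bias and CV but a TEa that varies with concentration, the sigma metric is no longer a single number. The TEa values below are placeholders for illustration, not the actual Parkes “A” zone limits.

```python
# Sigma metric = (TEa% - |bias%|) / CV%. With a concentration-dependent TEa,
# the same assay gives a different sigma at each glucose level.

bias_pct = 2.0  # assumed constant bias (%)
cv_pct = 4.0    # assumed constant CV (%)

# hypothetical allowable total error (%) at several glucose concentrations
tea_by_glucose = {50: 25.0, 100: 18.0, 200: 15.0, 400: 12.0}  # mg/dL -> TEa %

for glucose, tea_pct in tea_by_glucose.items():
    sigma = (tea_pct - abs(bias_pct)) / cv_pct
    print(f"glucose {glucose:3d} mg/dL: TEa {tea_pct:4.1f}% -> sigma = {sigma:.1f}")
```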

So I’m not sure how to proceed.


What needs to be measured to ensure the clinical usefulness of an assay

July 19, 2016


I was happy to see an editorial which, IMHO, states the error components that must be understood to ensure the clinical usefulness of an assay. Of course bias and imprecision are mentioned, but in addition the author mentions freedom from interferences and pre- and post-analytical errors.

One can ask: don’t interferences and pre- and post-analytical errors cause bias? Since the answer is yes, why do these terms need to be mentioned if it was already stated that bias is to be measured? The reason is that the way bias is measured in many cases will fail to detect the biases from interferences and pre- and post-analytical errors.

For example, if regression is used, the average bias will be estimated, not the individual biases that can occur from interferences.

If σ is estimated, the bias usually comes from either regression or quality control samples, so again interference biases don’t get counted.
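A quick simulation (my own illustration, not from the editorial) shows how an average bias can look fine while a few interference-affected samples carry large individual biases:

```python
import numpy as np

# A few samples carry a large interference bias, but the mean difference
# stays near zero - an average bias estimate misses the problem.

rng = np.random.default_rng(1)
reference = rng.uniform(50, 150, size=100)
candidate = reference + rng.normal(0, 2, size=100)  # ordinary analytical noise

# suppose ~3% of samples contain an interfering substance adding +40 units
interfered = rng.random(100) < 0.03
candidate[interfered] += 40.0

differences = candidate - reference
print(f"Average bias: {differences.mean():.1f}")             # small, looks acceptable
print(f"Largest individual bias: {differences.max():.1f}")   # large, clinically risky
```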

Finally, most of the studies are done in ways in which pre and post analytical errors have been minimized – the studies are performed outside of the routine way of processing patient samples. Hence, to ensure the clinical usefulness of an assay, one must construct protocols that measure all of the error components mentioned in the first paragraph.