My battle with commutability

June 30, 2018

My efforts to publish a critique of the three articles about commutability in the March 2018 issue of Clinical Chemistry have failed. After a number of rejections, I got the message. So I will put forth my comments in a series of blog entries. For simplicity, when I refer to these three articles, I’ll simply say the IFCC method.

First, I should make it clear that I’m not against commutability – who would be? Commutability of reference materials is of course a good thing. My critique is largely about the experiments suggested in these articles.

Interferences

Let’s look at the commutability experiment. One has a reference material and clinical samples and runs them on each of two methods (where one method is a reference method, if available). The between-method difference for the reference material is compared with the average between-method difference for the clinical samples, and if there is no difference (more on “no difference” later), the reference material is commutable. If there is an outlier in the clinical samples, IFCC allows it to be removed.
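To make this concrete, here’s a toy sketch of the calculation in Python. This is my own simplification with made-up numbers and a crude 2-SD criterion, not the actual IFCC procedure:

```python
import numpy as np

# Toy sketch of the core commutability calculation (my own simplification,
# not the IFCC procedure). Paired results from two measurement procedures.
clinical_x = np.array([4.1, 5.3, 6.8, 7.9, 9.2])   # method 1 (e.g., reference method)
clinical_y = np.array([4.3, 5.6, 7.0, 8.3, 9.5])   # method 2

ref_material_x = 6.5   # reference material measured on method 1
ref_material_y = 6.8   # reference material measured on method 2

# Between-method differences for the clinical samples
diffs = clinical_y - clinical_x
mean_diff = diffs.mean()
sd_diff = diffs.std(ddof=1)

# Does the reference material behave like the clinical samples?
rm_diff = ref_material_y - ref_material_x
# A crude criterion: within ~2 SD of the clinical-sample differences
commutable = abs(rm_diff - mean_diff) <= 2 * sd_diff
print(f"mean clinical diff={mean_diff:.2f}, RM diff={rm_diff:.2f}, commutable={commutable}")
```

The real IFCC criterion is more elaborate than a 2-SD band, but the idea is the same: the reference material should behave like a clinical sample.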

Thus, a commutability experiment is essentially an experiment about interferences. If there is a difference in response between the reference material and the clinical samples, interferences must be the cause.

An interference is a form of bias in a measurement method due to the effect of a component other than the measurand. The effects of interferences come in all sizes. That is, each sample, whether a clinical sample or a reference material, contains a combination of substances that can interfere. The net effect of this combination can be a bias of any size: none, small, medium, or large.

So I found it rather puzzling that IFCC suggests that interferences in a clinical sample can make the sample unsuitable for a commutability assessment and that such samples should be excluded from the assessment. Here’s why this is a problem.

If a lab knew there was an interference, why would it run the sample and report an erroneous result to clinicians? Hence the IFCC guidance implies that one can exclude samples that might interfere. Thus, samples from ICU or dialysis patients might be excluded because those patients take many medications; lipemic samples or samples with high or low hematocrits might be excluded; and so on. But this is a prescription for bias, since in actual routine clinical use the excluded samples would be present. So how could the results of a commutability assessment remain valid?

It’s just as puzzling that the authors suggest that measurement procedures themselves need to be vetted, and that procedures with “inadequate selectivity” are to be excluded.

For example, in glucose meters, chloride is an interfering substance: if the chloride concentration is high or low, there is a small bias in the result. Should one disqualify glucose meters, or not allow samples with high or low chloride?
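Here’s a toy model of what I mean. The bias size and the “normal” chloride value are numbers I made up for illustration, not any meter’s actual specification:

```python
# Toy model of a chloride interference on a glucose meter reading.
# The bias coefficient and "normal" chloride value are invented for illustration.
def glucose_with_chloride_bias(true_glucose_mgdl, chloride_mmol,
                               normal_chloride=100.0, bias_per_mmol=0.05):
    """Return a simulated meter reading with a small chloride-dependent bias."""
    bias = bias_per_mmol * (chloride_mmol - normal_chloride)
    return true_glucose_mgdl + bias

for cl in (85, 100, 115):  # low, normal, high chloride (mmol/L)
    print(cl, round(glucose_with_chloride_bias(100.0, cl), 1))
```

The point is that every chloride level produces some bias; there is no magic cutoff where the interference starts.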

And since IFCC already allows outliers to be deleted from the analysis, the advice to exclude samples makes no sense to me and encourages bias.


Dealing with user error is not new

June 25, 2018

A few blogs ago, I reported that a committee had suggested that total error include all phases of testing. I battled (unsuccessfully) during the revision of EP27 to include user error as an error source.

Back in 1978, 40 years ago, anesthesiology was beset with many serious injuries and deaths. One of the causes was user error. But as a classic paper showed, many of the user errors were caused in part by bad design of the instrumentation. (In flying, on some planes in the past, the gear and flap levers were next to each other, which resulted in pilots raising the gear instead of the flaps after landing.) So there are ways to change the design to decrease the rate of user error.

The process used to improve anesthesiology was a FRACAS-like process (failure reporting and corrective action system), although the term FRACAS is not used in the article.


How to insult manufacturers

June 22, 2018

In the third article about commutability, one of the stated reasons for non-commutability is the use of the wrong calibration model for the assay.

First of all, an incorrect calibration model has nothing to do with commutability. Sure, it causes errors, but so do a lot of things, like coding the software incorrectly.

But what’s worse is the example given. There are a bunch of points that are linear up to a certain level, after which the response drops off. In this example, the calibration model chosen is linear, which of course is wrong. But come on, people, do you really think a manufacturer would mess this up?
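For the record, here’s how obvious that mistake would be. Fit a straight line to simulated responses that plateau (toy numbers, my own sketch) and the residuals flag the problem immediately:

```python
import numpy as np

# Simulated calibration data: response is linear up to ~60, then flattens
# (values invented for illustration).
conc = np.array([0, 10, 20, 30, 40, 50, 60, 70, 80, 90], dtype=float)
resp = np.where(conc <= 60, 2.0 * conc, 120.0 + 0.1 * (conc - 60))

# Naive (wrong) choice: a straight-line calibration model
slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)

# Large, systematic residuals at the top of the range expose the bad model
for c, r in zip(conc, residuals):
    print(f"conc={c:4.0f}  residual={r:7.1f}")
```

Any manufacturer that plots residuals during development would catch this.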