Do manufacturers always publish that their glucose meter is best?

February 27, 2017


After reading an evaluation article in which the conclusion was that the sponsoring manufacturer’s glucose meter was best, I went through the literature to see how often this happens, searching two journals from 2012 through 2016.

To be included in the list below, the article had to meet the following criteria:

  • The study was sponsored by a manufacturer
  • The study compared 2 or more meters, not all made by the sponsor

The results are shown below. Eight articles met the criteria. In some articles, the stated conclusion was that the sponsor’s glucose meter was best; in others, I had to look through the data. If the sponsor’s glucose meter was best, the score was 1; if some other manufacturer’s glucose meter was best, the score was 0; and in one case there was a tie, so the score was 0.5. N refers to the number of meters in the article.

J Diabetes Sci Technol

| Reference | Company | Sponsor’s meter | Winner | N | Score |
|---|---|---|---|---|---|
| 2016, 1316-1323 | Sanofi | BGStar / iBGStar | BGStar / iBGStar | 5 | 1 |
| 2015, 1041-1050 | Bayer | Contour | Tie: Contour / Accu-Chek | 4 | 0.5 |
| 2013, 1294-1304 | Bayer | Contour | Contour | 5 | 1 |
| 2012, 1060-1075 | Roche | Accu-Chek | None declared best, but Freestyle was best | 43 | 0 |
| 2012, 547-554 | Abbott | Optium Xceed | Optium Xceed | 6 | 1 |

Diabetes Technology and Therapeutics

| Reference | Company | Sponsor’s meter | Winner | N | Score |
|---|---|---|---|---|---|
| 2014, 8-15 | Bayer | Contour | Contour | 5 | 1 |
| 2014, 113-122 | Ypsomed | Mylife Pura / Mylife Unio | None declared best | 12 | 0 |
| 2012, 330-337 | Abbott | Freestyle | Freestyle | 5 | 1 |

So 69% of the time (a total score of 5.5 across 8 articles), the sponsoring manufacturer’s glucose meter was best.
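
As a check on the arithmetic, here is a minimal tally of the Score column, copied from the table above:

```python
# Score column from the table above:
# 1 = sponsor's meter best, 0.5 = tie, 0 = another manufacturer's meter best
scores = [1, 0.5, 1, 0, 1, 1, 0, 1]
print(sum(scores) / len(scores))  # 0.6875, i.e., about 69%
```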


EFLM – after three years it’s disappointing

February 15, 2017


Thanks to Sten Westgard, whose website alerted me to an article about analytical performance specifications. Thanks also to Clin Chem Lab Med for making this article available without a subscription.

To recap, the EFLM task group was going to fill in the details of the analytical performance specifications framework initially described at the Milan conference held in 2014.

Basically, what this paper does is assign analytes (not all analytes that can be measured, but a subset) to one of three categories that determine how analytical performance specifications are to be arrived at: clinical outcomes, biological variation, or state of the art. Note that no specifications are provided – only which analytes are in which categories. It doesn’t seem like this should have taken three years.

And I don’t agree with this paper.

For one, talking about “analytical” performance specifications implies that user error and other mishaps that cause errors are out of scope. This is crazy, because the preferred option is based on the effect of assay error on clinical outcomes, and it makes no sense to exclude errors just because their source is not analytical – the effect on the patient is the same.

I don’t agree that the second and third options (biological variation and state of the art) should ever play a role. My reasoning follows:

If a clinician orders an assay, the result must have some use in deciding on treatment. If this is not the case, the only reason a clinician would order such an assay is that he has to make a boat payment and needs the funds.

So, for example, say the clinician will provide treatment A (often no treatment) if the result falls within X1–X2, and treatment B if the result is greater than X2. Of course this is oversimplified, since factors other than the assay result are involved. But if the assay result is 10 times X2 while the true value is between X1 and X2, the clinician will make the wrong treatment decision because of laboratory error. I submit that this model applies to all assays and that if one assembles clinician opinion, one can construct error specifications (see the last sentence of this post). A minimal sketch of the model appears below.
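
Here is that decision model in code. The cutoffs X1 and X2 and the result values are illustrative numbers of my choosing, not from any guideline:

```python
X1, X2 = 70.0, 180.0  # illustrative decision cutoffs, not from any guideline

def treatment(result: float) -> str:
    """Choose a treatment from the assay result alone (oversimplified)."""
    if X1 <= result <= X2:
        return "A (often no treatment)"
    if result > X2:
        return "B"
    return "other (below X1; not part of the example)"

truth = 120.0               # true value within X1-X2, so treatment A is correct
reported = 10 * X2          # grossly erroneous result (e.g., from user error)
print(treatment(truth))     # A (often no treatment)
print(treatment(reported))  # B - the wrong treatment, caused by laboratory error
```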

Other comments:

In the event that outcome studies do not exist, the authors encourage double-blind randomized controlled trials. Get real, people – these studies would never be approved! (They would amount to feeding clinicians the wrong answer to see what happens.)

The authors also suggest simulation studies, but I have previously commented that the premier simulation study they cite – the Boyd and Bruns glucose meter simulations – was flawed.

The Milan 2014 conference rejected the use of clinician opinion to establish performance specifications. I don’t see how the opinions of clinical chemists and pathologists trump those of clinicians.


Revisiting Bland-Altman plots and a paranoia

February 13, 2017


Over 10 years ago, I submitted a paper critiquing Bland-Altman plots. Since the original Bland-Altman publication is the most cited paper ever in The Lancet, I submitted my paper with some temerity.

Briefly, the issue is this. When one is comparing two methods, Bland and Altman suggest plotting the difference (Y-X) vs. the average of the two methods, (Y+X)/2. Bland and Altman also stated in a later paper (1) that even if the X method is a reference method (they use the term gold standard), one should still plot the difference against the average; plotting it against X is misguided and will lead to spurious correlations. They attempted to prove this with formulas.

Not being so great at math, but doubting their premise, I ran some simulations. The results are shown in the table below. Basically, they say that when you have two field methods, you should plot the difference vs. (Y+X)/2, as Bland and Altman suggest, but when you have a field method and a reference method, you should plot the difference vs. X. The values in the table are the correlation coefficients for Y-X vs. X and for Y-X vs. (X+Y)/2, from repeated simulations in which Y is always a field method and X is either a field method or a reference method. A sketch of the simulation follows the table.

 

| Case | Y-X vs. X | Y-X vs. (X+Y)/2 |
|---|---|---|
| X = reference method | ~0 | ~0.1 |
| X = field method | ~-0.12 | ~0 |
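
For the curious, here is a minimal sketch of this kind of simulation (not my original program; the distribution of true values and the error SDs are assumptions for illustration). With these parameters the pattern in the table is reproduced – the correlation is near zero only when the difference is plotted against the appropriate quantity – though the size of the nonzero correlations depends on the assumed error SDs:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
truth = rng.normal(100, 20, n)   # assumed true values
err_x = rng.normal(0, 4, n)      # assumed field-method errors (SD = 4)
err_y = rng.normal(0, 4, n)

def corr(a, b):
    return np.corrcoef(a, b)[0, 1]

# Case 1: X is a reference method (error taken as negligible)
x, y = truth, truth + err_y
d = y - x
print(f"X = reference: corr(D, X) = {corr(d, x):+.2f}, "
      f"corr(D, (X+Y)/2) = {corr(d, (x + y) / 2):+.2f}")

# Case 2: X is a field method with its own error
x, y = truth + err_x, truth + err_y
d = y - x
print(f"X = field:     corr(D, X) = {corr(d, x):+.2f}, "
      f"corr(D, (X+Y)/2) = {corr(d, (x + y) / 2):+.2f}")
```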

 

The paranoia

I submitted my paper as a technical brief to Clin Chem and included my simulation program as an appendix. I was told to recast the paper as a Letter, and the Letter was then rejected. I submitted it to another journal (I think it was Clin Chem Lab Med), where it was also rejected. I then submitted it to Statistics in Medicine (2), where it was accepted.

Now, in the lab medicine field I am known to the other statisticians, and I have sometimes published papers not to their liking. At Statistics in Medicine, I am an unknown, and lab medicine is a small part of that journal. So maybe my paper was judged solely on merit, or maybe I’m just paranoid.

References

  1. Bland JM, Altman DG. Comparing methods of measurement – why plotting difference against standard method is misleading. Lancet 1995;346:1085-1087.
  2. Krouwer JS. Why Bland-Altman plots should use X, not (Y+X)/2 when X is a reference method. Statistics in Medicine 2008;27:778-780.