January 30, 2016
OK, I admit it: I’m a little slow and behind the times. While looking at the email for the February table of contents for Clinical Chemistry, I discovered online content that I had never noticed – a section called “Pearls of Laboratory Medicine.”
After viewing a few of them, I realized there was a lot more available by joining the Clinical Chemistry Training Council site.
Wow, I was impressed by the amount of available material. I may not agree with everything there – as always – but this looks like a valuable resource. And I like the fact that material is presented by slides and webcasts as well as more traditional means.
January 19, 2016
When the 2003 ISO standard for glucose meter performance was prepared, industry regulatory affairs people controlled the standard. It called for 95% of results above 75 mg/dL to fall within a total error of ±20%. The standard was said to be based on medical requirements – clearly it was not based on the state of the art, since glucose meters perform better than that.
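For concreteness, here is how that acceptance criterion could be checked in code – a minimal sketch, assuming paired meter and reference values in mg/dL, and ignoring the standard's separate absolute-error band for low glucose:

```python
def meets_iso_2003(meter, reference, tol=0.20, cutoff=75, pass_rate=0.95):
    """Sketch of the 2003 ISO criterion for results above 75 mg/dL:
    at least 95% of results must be within +/-20% of reference.
    (The full standard also has an absolute-error criterion below
    75 mg/dL, which this simplified check omits.)"""
    pairs = [(m, r) for m, r in zip(meter, reference) if r > cutoff]
    if not pairs:
        return True  # no results above the cutoff to evaluate
    within = sum(1 for m, r in pairs if abs(m - r) / r <= tol)
    return within / len(pairs) >= pass_rate
```

Note how loose the criterion is: one result in twenty can be arbitrarily wrong and a meter still passes.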
A problem probably unforeseen by these regulatory people was that a bunch of new players entered the glucose meter market and of course had no trouble getting FDA approval – the FDA used the ISO standard in its approval process. The number of meter brands on the market grew from 32 in 2005 to 87 in 2014. And some of the new meters sold their strips at a much lower price than the major manufacturers. This caused the four major companies to lose some market share.
Industry still plays a dominant role in glucose meter standards, but it seems that the original regulatory affairs people are out. Now, industry is working with the Diabetes Technology Society to certify glucose meters under new performance standards. Thus, meters that have FDA approval will be tested according to the tighter 2013 ISO standard and only meters that pass will receive a seal of approval from the Diabetes Technology Society.
Klonoff DC, Lias C, Beck S. Development of the Diabetes Technology Society Blood Glucose Monitor System Surveillance Protocol. J Diabetes Sci Technol, in press. Available at http://dst.sagepub.com/content/early/2015/12/10/1932296815614587.full.pdf+html
January 2, 2016
A couple of entries ago, I mentioned an upcoming publication. It has now appeared in J Diabetes Sci Technol ahead of print (subscription required).
Basically, each glucose meter result in the A zone of an error grid has its difference from reference squared, and that value is scaled to range from 0 (result = reference) to 1 (result on the A zone boundary). If a result falls outside the A zone, no A zone Taguchi loss value is calculated for it. These “Taguchi loss” values are then averaged to give the average Taguchi loss (ATL). One can calculate the ATL for all zones, although I did not do this for the article.
The ATL provides a way to distinguish performance among glucose meters with similar apparent performance (most values in the A zone and none beyond the B zone). I believe it is an improvement over the MARD statistic (mean absolute relative difference).
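To illustrate the calculation described above, here is a minimal sketch. The symmetric ±15% A-zone boundary is a hypothetical simplification for illustration – real error grid zone boundaries are more complex than a fixed percentage band:

```python
def average_taguchi_loss(meter, reference, zone_limit=0.15):
    """Average Taguchi loss (ATL) over A-zone results, assuming a
    hypothetical symmetric A zone of +/- zone_limit around reference."""
    losses = []
    for m, r in zip(meter, reference):
        half_width = zone_limit * r  # distance from reference to the A-zone boundary
        if abs(m - r) <= half_width:  # only results inside the A zone contribute
            # squared difference, scaled: 0 at reference, 1 on the boundary
            losses.append(((m - r) / half_width) ** 2)
    return sum(losses) / len(losses) if losses else float("nan")
```

Because the loss is squared, results near the A-zone boundary are penalized much more heavily than results near the reference value, which is what lets the ATL separate meters that an error grid alone would rate as equivalent.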
December 5, 2015
So I was reading an article about glucose meter performance and I came across the MARD (mean absolute relative difference) statistic. I have seen it before – it is used for glucose meters and almost nowhere else. What bothered me was that the paper used MARD as a summary statement about the performance of different meters. The problem is that MARD has so many flaws that I wrote a paper critiquing it and submitted it. As soon as I clicked the submit button, I realized I had left out an important element: namely, why were people using MARD?
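For readers unfamiliar with the statistic, MARD is simple to compute – a minimal sketch, assuming paired meter and reference values:

```python
def mard(meter, reference):
    """Mean absolute relative difference, expressed as a percentage:
    the average of |meter - reference| / reference over all pairs."""
    diffs = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100.0 * sum(diffs) / len(diffs)
```

Its simplicity is part of its appeal – and part of the problem, since a single number hides the distribution of errors that an error grid is designed to reveal.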
In any case, I got mixed reviews on my paper, and the editor said I could try to submit a revision. But the more I thought about it, the more I realized that my paper was not that good. I was about to drop it when it occurred to me that, rather than complain about MARD, I might be able to come up with a better statistic. After all, people use MARD because they want to differentiate meters that appear to have similar performance when analyzed with error grids.
So I wrote a new paper that provides an alternative to MARD. It has been accepted and will appear shortly.
November 15, 2015
At the Milan conference (1st EFLM Strategic Conference, “Defining analytical performance goals”), one of the papers (1) suggests that analytical performance specifications should be prepared from indirect outcome studies using decision analysis. The only example presented is a simulation, which is not decision analysis. Decision analysis is also discussed in that section, but only at an abstract level.
I have performed decision analysis and discuss it in my book (2). Decision analysis requires a quantitative variable that is either maximized or minimized. In my case, we performed financial decision analysis, and the quantity to be maximized was the net present value (NPV) of future cash flows. The Milan paper never identifies a quantitative parameter to be optimized.
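For example, NPV discounts each future cash flow back to the present – a minimal sketch of the kind of quantity a decision analysis can maximize, assuming a constant discount rate per period:

```python
def npv(rate, cash_flows):
    """Net present value of a series of cash flows.
    cash_flows[0] occurs now; cash_flows[t] occurs t periods in the future.
    Each future cash flow is discounted by (1 + rate) ** t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows))
```

The point is not the formula itself but that it gives decision analysis a concrete number to optimize across alternatives – exactly the element missing from the Milan paper.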
I don’t understand how decision analysis can be recommended without any known examples or details about how one would go about it.
- Horvath AR, Bossuyt PMM, Sandberg S, et al. Setting analytical performance specifications based on outcome studies – is it possible? Clin Chem Lab Med 2015; 53(6): 841–848.
- Krouwer JS. Assay Development and Evaluation: A Manufacturer’s Perspective. Washington, DC: AACC Press; 2002, see Chapter 3.
November 5, 2015
In my blog post “Total Error and Milan,” I mentioned that clinician surveys were in the draft consensus statement but were dropped from the final, published consensus statement. (The draft consensus statement is no longer available on the EFLM site.)
I had occasion to read a paper from the Milan conference (1), which makes clear why clinician surveys were dropped:
“RCVs from vignettes should probably not be used on their own as a basis for setting analytical performance specifications, since clinicians seem “uninformed” regarding important principles.”
RCV = reference change values.
For an example of how clinician surveys were used to set analytical performance specifications, see reference 2.
- Thue G, Sandberg S. Analytical performance specifications based on how clinicians use laboratory tests. Clin Chem Lab Med 2015; 53(6): 857–862.
- Klonoff DC, Lias C, Vigersky R, et al. The surveillance error grid. J Diabetes Sci Technol 2014; 8: 658–672.