January 19, 2016
When the 2003 ISO standard for glucose meter performance was prepared, the regulatory affairs people of industry controlled the standard. The standard called for 95% of results above 75 mg/dL to be within a total error of ±20%. The standard was said to be based on medical requirements – clearly it was not based on the state of the art, since glucose meters perform better than that.
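As a sketch, a 2003-style acceptance criterion can be expressed as a simple check. This is my illustration, not the standard's official evaluation procedure; the function and parameter names are mine.

```python
# Illustrative sketch (not the standard's official algorithm): check a
# 2003-style acceptance criterion that at least 95% of meter results with
# reference values above 75 mg/dL fall within +/-20% of reference.
def meets_2003_criterion(pairs, limit_pct=20.0, min_fraction=0.95):
    """pairs: list of (meter_result, reference_value) tuples in mg/dL."""
    eligible = [(m, r) for m, r in pairs if r > 75]
    if not eligible:
        return True  # no results in the evaluated range
    within = sum(1 for m, r in eligible
                 if abs(m - r) / r * 100.0 <= limit_pct)
    return within / len(eligible) >= min_fraction
```

Note what a percentile criterion tolerates: with 20 eligible results, one result off by 30% still passes (19/20 = 95%), no matter how far off that one result is.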
A problem probably unforeseen by these regulatory people was that a bunch of new players entered the glucose meter market and of course had no trouble getting FDA approval – the FDA used the ISO standard in its approval process. The number of meter brands on the market grew from 32 in 2005 to 87 in 2014. And some of the new meters sold their strips at a much lower price than the major manufacturers. This caused the four major companies to lose some market share.
Industry still plays a dominant role in glucose meter standards, but it seems that the original regulatory affairs people are out. Now, industry is working with the Diabetes Technology Society to certify glucose meters under new performance standards. Thus, meters that have FDA approval will be tested according to the tighter 2013 ISO standard and only meters that pass will receive a seal of approval from the Diabetes Technology Society.
Klonoff DC, Lias C, Beck S. Development of the Diabetes Technology Society Blood Glucose Monitor System Surveillance Protocol. J Diabetes Sci Technol, in press. Available at http://dst.sagepub.com/content/early/2015/12/10/1932296815614587.full.pdf+html
October 1, 2015
I was working on a paper and decided to comply with the nomenclature expected by the journal, so I used the word “measurand.” The word was underlined as unknown by the dictionary used by “Word.” I went to an online version of the Merriam-Webster dictionary and no match was found for measurand. So much for ISO nomenclature.
August 27, 2015
The Westgard web has some comments about IQCP.
Here are mine.
- There is no distinction between potential errors and errors that have occurred. This is non-standard: in traditional risk management, different methods are used for potential errors vs. errors that have occurred. For example, on page 12 of the IQCP book, which focuses on specimen risks, “Kim” reviewed log books and noted errors. Yet on the same page, Kim is instructed to ask “What could go wrong?” The problem is that there are clearly errors that have occurred, yet there could also be potential new errors that have never occurred.
- The mitigation steps to reduce errors look phony. For example, an error source is: “Kim noted some specimens remained unprocessed for more than 60 minutes without being properly stored.” The suggested mitigation is: “Train testing personnel to verify and document: collection time and time of receipt in laboratory, and proper storage and processing of specimen.” The reason the mitigation sounds phony is that most labs would already have this training in place. The whole point of risk management is to put in place mitigations that don’t already exist.
- There is no measurement of error rates. Because there is no distinction between potential errors vs. errors that have occurred, there is a missed opportunity to measure error rates. In the real world, when errors occur and mitigations are put in place, the error rate is measured to determine the effectiveness of the mitigations.
- The word “Pareto” cannot be found in IQCP. Here is why this is a problem. In IQCP, for each section, a few errors are mentioned. In the real world, for either potential errors or those that have occurred, the number of errors is much larger – so much larger that there are not enough resources to deal with all errors. That is why the errors are classified and ranked (the ranking is often displayed as a Pareto chart). The errors at the top of the chart are dealt with. In the naïve IQCP, there is no need to classify or rank errors because all are dealt with. The same problem occurs in CLSI EP23 and ISO 14971.
Conclusion: One might infer that no one who participated in the writing of IQCP has ever performed actual risk management using standard methods or perhaps any methods.
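The Pareto ranking described above can be sketched in a few lines. The error-source labels and the 80% coverage cutoff are my illustrative choices, not anything from IQCP.

```python
from collections import Counter

def pareto_rank(error_log, coverage=0.80):
    """Rank error sources by observed frequency and return both the full
    ranking and the smallest top group accounting for `coverage` of all
    observed errors (the classic Pareto focus).
    error_log: iterable of error-source labels (hypothetical names)."""
    counts = Counter(error_log)
    total = sum(counts.values())
    ranked = counts.most_common()          # sorted, most frequent first
    selected, running = [], 0
    for source, n in ranked:
        selected.append((source, n))
        running += n
        if running / total >= coverage:
            break
    return ranked, selected

# Hypothetical log: a couple of sources dominate, as is typical.
log = ["mislabel"] * 50 + ["delay"] * 30 + ["hemolysis"] * 15 + ["other"] * 5
ranked, top = pareto_rank(log)
```

With this log, the top two sources already account for 80% of the errors, so limited resources would be spent there first.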
April 11, 2015
Ever wonder why ISO or CLSI glucose standards use primarily one set of limits rather than an error grid? Here’s my explanation.
With an error grid – especially a glucose error grid – there are multiple sets of limits. Data inside the innermost limits imply no harm to patients, and data outside the outermost limits imply serious injury or death; the limits in between correspond to intermediate degrees of harm. Although the limits are provided without percentages of data that should fall in any region, it is implied that there should be no results beyond the outermost limits.
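A toy sketch of the multiple-zone idea follows. Real grids (Clarke, Parkes) use clinically derived regions that also depend on the absolute glucose level; the relative-error thresholds here are invented purely to show the structure of graded harm.

```python
def zone(meter, reference):
    """Classify one result into a harm zone by relative error alone.
    Hypothetical thresholds -- real error grids are not this simple.
    "A": no harm; "B": little or no harm; "C": possible moderate harm;
    "outermost": possible serious injury or death."""
    err_pct = abs(meter - reference) / reference * 100.0
    if err_pct <= 15:
        return "A"
    elif err_pct <= 20:
        return "B"
    elif err_pct <= 40:
        return "C"
    else:
        return "outermost"
```

The point of the structure is that a standard built on it must say something about the "outermost" zone, whereas a single set of limits never has to.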
With ISO or CLSI, the use of one primary set of limits (corresponding to the innermost limits of an error grid) relieves these standards organizations from having to even mention a case where serious injury or death may occur. And this is probably because these groups are dominated by regulatory affairs people from industry.
October 24, 2014
Recommended reading – CAP interview of Jim Westgard regarding lab QC over the last 30 years including the current focus on risk management: http://www.captodayonline.com/lab-qc-much-room-improvement/
July 31, 2014
There was a symposium about glucose meters with three outstanding talks. BTW, one nice feature of this year’s AACC meeting was that one could easily download each speaker’s presentation. The first talk by Dr. David Sacks reviewed the current glucose meter error grids:
- the 2013 version of ISO 15197 for SMBG meters
- the 2013 version of POCT12-A3 for hospital meters
- the 2014 draft FDA guidance for SMBG meters
- the 2014 draft FDA guidance for hospital meters
Dr. Sacks never mentioned that the 2014 draft FDA guidance for hospital meters says: don’t use the ISO standard – it does not adequately protect patients. Now, the FDA probably meant don’t use POCT12-A3, since that standard is for hospital meters, but the point is FDA is not happy with either the ISO or CLSI glucose meter standard, which is why they wrote their own.
After the talks, there was a question and answer session in which Mitch Scott, the chair of the symposium, asked Dr. Sacks why the POCT12-A3 standard allows 2% of results to be unspecified (meters can have any values relative to reference). This is somewhat of a strange question, since Dr. Scott was a member of the POCT12-A3 committee and previously answered this question himself in a public meeting – the 2% was a compromise. Dr. Sacks’s answer was different. He said you can’t prove that 100% of the results are within limits, which is of course true, but it is not a reason for setting such a goal. I made this point in a brief comment. I have also published on the absurdity of this reasoning: no one would specify a goal of 98% “right” site surgery (95% in the article, since it dealt with an earlier standard) – see: Krouwer JS. Wrong thinking about glucose standards. Clin Chem 2010;56:874-875. And since there are about 8 billion glucose meter results in the US each year, allowing 2% to be anywhere means that 160 million glucose results could potentially harm patients. Put another way, 2% of a huge number is still a very big number.
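The back-of-the-envelope arithmetic above fits in a few lines (the 8 billion figure is the estimate from the text):

```python
annual_results = 8_000_000_000     # ~8 billion US glucose meter results per year (figure from the text)
unspecified_fraction = 0.02        # POCT12-A3 allows 2% of results to be unspecified
unspecified_results = int(annual_results * unspecified_fraction)
print(f"{unspecified_results:,}")  # prints 160,000,000
```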
January 18, 2014
I was made aware of a new FDA glucose guidance by Sten Westgard. Reading the guidance revealed that I’ve been vindicated. Here’s why.
I’ve been critical of ISO and CLSI glucose standards and have critiqued them in the literature (1-5). I’ve also advocated for better performance standards on CLSI subcommittees that I’ve led, such as EP21 (Total Error) and EP27 (Error Grids). But I was summarily kicked off those subcommittees.
Previously, FDA recommended that companies adhere to ISO glucose guidelines, but a sentence in the new FDA guidance is rather striking: “FDA believes that the criteria set forth in the ISO 15197 standard do not adequately protect patients using BGMS devices in professional settings, and does not recommend using these criteria for BGMS devices.”
As one reads what the FDA does recommend, some things that I have published appear in one form or another, such as:
- The FDA guidance requires error limits on 100% of the data. (ISO and CLSI allow a certain percentage of the data to have unspecified errors).
- User error should not be eliminated – For example, FDA says: “FDA recognizes that most study evaluations performed for pre-market submissions occur in idealized conditions, thereby potentially overestimating the total accuracy of the BGMS device, even when performed in the hands of the intended user. Nonetheless, it is important that you design your study to most accurately evaluate how the device will perform in the intended use population.”
- And related to point #2, FDA says: “Testing should be performed by the intended POC (point of care) user (e.g., nurses, nurse assistants, etc.) to accurately reflect device performance in POC settings; at least 9 operators should participate in each study (capillary, venous, and arterial).” Readers may remember that my recommendation that user error should not be excluded in EP21 was vigorously objected to by some subcommittee members.
- Krouwer JS. Wrong thinking about glucose standards. Clin Chem, 2010;56:874-875.
- Krouwer JS and Cembrowski GS. A review of standards and statistics used to describe blood glucose monitor performance. Journal of Diabetes Science and Technology, 2010;4:75-83.
- Krouwer JS and Cembrowski GS. Towards more complete specifications for acceptable analytical performance – a plea for error grid analysis. Clinical Chemistry and Laboratory Medicine, 2011;49:1127-1130.
- Krouwer JS. Why specifications for allowable glucose meter errors should include 100% of the data. Clinical Chemistry and Laboratory Medicine, 2013;51:1543-1544.
- Krouwer JS. The new glucose standard, POCT12-A3 misses the mark. Journal of Diabetes Science and Technology, 2013;7:1400-1402.