Diabetes Meeting Comments

May 22, 2013


I just got back from a diabetes meeting where I was one of the speakers – http://diabetestechnology.org/bgm/.

One of the things I learned was that besides the glucose meters from the well-known manufacturers, there are many lesser known (and less expensive) meters which have a significant market share but perform poorly compared to the meters from the well-known manufacturers.

I also learned that there are 32,000 adverse events about glucose meters reported to the FDA each year. At a 2010 FDA meeting, this number was 12,000. I doubt that anyone is examining this data.

I disagreed with one of the speakers, who said, regarding the fact that the ISO and CLSI standards do not specify allowable error for 100% of the data, "it's statistically impossible to have a goal for 100% of the data" (quote is approximate). This is wrong. There's nothing wrong with having error goals for 100% of the data; it's just that one cannot prove (statistically) that such goals will be met. The new ISO goal allows 1% of the samples to have any error at all: if 1% of the samples read 300 when the truth was 30, the meter would still be acceptable according to ISO.

Actually, it is difficult even to prove that 99% of the samples meet goals. For example, showing at the 95% confidence limit that no more than 1% of the samples exceed an error goal requires 0 failures in 368 tries; to satisfy things at 99% requires 525 samples. But there are about 8 billion glucose determinations per year in the US (assuming that people who take insulin test three times per day), so even if one proves that the failure rate is no more than 1%, there could still be eighty million dangerous results per year! No one would specify a correct site surgery rate of 99%, since that allows a wrong site surgery rate of 1%.

NOTE: I didn't bring these points up in the meeting since this speaker made his comment as an aside from his main talk.
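The sample-size arithmetic above can be sketched in a few lines. This is only an illustration, assuming a zero-failure exact binomial bound (the smallest n with (1 − p)^n ≤ α demonstrates, at the chosen tail probability, that the failure rate does not exceed p); the standards may use a different method.

```python
import math

def zero_failure_n(p, alpha):
    """Smallest n such that observing 0 failures in n samples rules out
    a true failure rate of p at tail probability alpha, i.e. the
    smallest n with (1 - p)**n <= alpha."""
    return math.ceil(math.log(alpha) / math.log(1.0 - p))

# Demonstrating a failure rate of no more than 1%:
print(zero_failure_n(0.01, 0.025))  # two-sided 95% bound: 368 tries

# Even a "proven" 1% failure rate is enormous at population scale:
tests_per_year = 8e9              # ~8 billion US glucose tests per year
print(0.01 * tests_per_year)      # 80 million potentially bad results
```

Note how quickly the required sample size grows: tightening the confidence level pushes n well past 500, which is why claims about the worst 1% of results are so hard to verify in a clinical trial.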

I also had a frustrating encounter. During lunch, I sat across from someone who had written a paper about which I had a question. When I asked my question, the author said he had never written what I was questioning. He then produced the paper – yes, he travels with his publications – and said, "show me." After some searching, I showed him the paragraph, and he said he didn't mean what was written.

My talk was about bias in clinical trials and the suggestion to use post market data to assess glucose meter performance.


QC (Quality Control) is not quality

May 14, 2013


Based on recent events, I'm restating that for a clinical assay, good quality control results do not imply good quality. Of course, good quality control results are a good thing, and poor quality control results mean that there are problems, but here are some examples where good quality control results don't mean good quality.

  1. QC samples do not inform about patient sample interferences, which can cause large errors and result in patient harm. Such events could occur with perfect QC results.
  2. QC informs about biases that persist across time. For example, if QC is performed twice per day, a bad calibration (where a calibration lasts for a month) will likely be detected, but short-term biases will likely be missed.
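The second point can be illustrated with a small simulation. This is only a sketch under made-up assumptions (QC exactly every 12 hours, a transient bias large enough that any QC sample falling inside it would be flagged): the chance that twice-daily QC even samples a short-lived bias is roughly its duration divided by the QC interval.

```python
import random

def qc_detection_prob(bias_hours, qc_interval_hours=12.0,
                      trials=100_000, seed=0):
    """Fraction of transient bias episodes that contain at least one
    QC event. QC runs at times 0, 12, 24, ...; each episode starts at
    a uniformly random time, and any QC sample that falls inside the
    episode is assumed to be flagged."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        start = rng.uniform(0.0, qc_interval_hours)
        # The episode [start, start + bias_hours) covers the next QC
        # event iff it reaches the end of the current QC interval.
        if start + bias_hours >= qc_interval_hours:
            hits += 1
    return hits / trials

print(qc_detection_prob(2.0))    # ~0.17: a 2-hour bias is usually missed
print(qc_detection_prob(720.0))  # 1.0: a month-long bad calibration is caught
```

Under these assumptions, a two-hour bias episode slips past twice-daily QC about five times out of six, while a persistent bias is always sampled eventually – which is the asymmetry described above.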

So if anyone claims that you can ensure your lab's quality by running QC according to some scheme, it's simply not true.