Whining rewarded?

August 29, 2014

debate

The table of contents of Clinical Chemistry for September includes a list of the most downloaded point / counterpoint articles, and I am number one on the list for my discussion of GUM (the Guide to the Expression of Uncertainty in Measurement): http://www.clinchem.org/content/60/9/1245.full


Rejected papers

August 24, 2014

satisfaction

For those who have had some papers rejected over the years (like me), this post is worth reading … http://majesticforest.wordpress.com/2014/08/15/papers-that-triumphed-over-their-rejections/

 


AACC 2014 and glucose meters

July 31, 2014

glucose

There was a symposium about glucose meters with three outstanding talks. BTW, one nice feature of this year’s AACC meeting was that one could easily download each speaker’s presentation. The first talk by Dr. David Sacks reviewed the current glucose meter performance standards:

the 2013 version of ISO 15197 for SMBG meters
the 2013 version of POCT12-A3 for hospital meters
the 2014 draft FDA guidance for SMBG meters
the 2014 draft FDA guidance for hospital meters

Dr. Sacks never mentioned that the 2014 draft FDA guidance for hospital meters says: don’t use the ISO standard – it does not adequately protect patients. Now, the FDA probably meant don’t use POCT12-A3, since that standard is for hospital meters, but the point is that the FDA is not happy with either the ISO or the CLSI glucose meter standard, which is why it wrote its own.

After the talks, there was a question and answer session in which Mitch Scott, the chair of the symposium, asked Dr. Sacks why the POCT12-A3 standard allows 2% of results to be unspecified (meters can have any values relative to reference). This was a somewhat strange question, since Dr. Scott was a member of the POCT12-A3 committee and had previously answered it himself in a public meeting – the 2% was a compromise. Dr. Sacks’s answer was different. He said you can’t prove that 100% of the results are within limits, which is of course true, but it is not a reason for setting such a goal. I made this point in a brief comment.

I have also published on the absurdity of this reasoning: no one would specify a goal of 98% “right site” surgery (95% in the article, since it dealt with an earlier standard) – see: Krouwer JS. Wrong thinking about glucose standards. Clin Chem 2010;56:874-875. And since there are about 8 billion glucose meter results in the US each year, allowing 2% to be anywhere means that 160 million glucose results could potentially harm patients. Put another way, 2% of a huge number is still a very big number.
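A back-of-the-envelope calculation makes this concrete (a minimal sketch in Python; the 8 billion figure is the estimate cited above):

    # Back-of-the-envelope: 2% of annual US glucose meter results.
    annual_results = 8_000_000_000        # estimate cited above
    unspecified = 0.02 * annual_results   # POCT12-A3 leaves 2% unspecified
    print(f"{unspecified:,.0f} results per year with no accuracy requirement")
    # -> 160,000,000 results per year with no accuracy requirement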

 


Published: Acute Versus Chronic Injury in Error Grids

July 19, 2014

article

This is a Letter to the editor (1) based on a new revision to the glucose meter error grid (2). The gist of the Letter is as follows. The error grid procedure involved surveying clinicians as to what glucose levels would prompt them to treat patients. But this deals with symptoms, or acute injury. If a glucose meter met these limits, one might think all is well. But diabetes also involves complications from continued elevated glucose, and the Letter proposed that a different error grid is required for this chronic injury.
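To make the error grid idea concrete, here is a minimal sketch in Python of how (reference, meter) result pairs are scored against a grid. The zones and percent-error boundaries below are purely illustrative assumptions – they are not the surveillance error grid’s actual zones, nor the proposed chronic-injury grid.

    # Minimal sketch of error grid scoring. The zones and percent-error
    # boundaries are purely illustrative assumptions, not the surveillance
    # error grid's actual zones.
    def zone(reference_mgdl: float, meter_mgdl: float) -> str:
        """Classify a (reference, meter) glucose pair into a risk zone."""
        error_pct = 100.0 * (meter_mgdl - reference_mgdl) / reference_mgdl
        if abs(error_pct) <= 15:
            return "A"   # no effect on treatment
        if abs(error_pct) <= 40:
            return "B"   # altered treatment, little or no harm
        return "C+"      # altered treatment, potential for harm

    for ref, meter in [(100, 108), (100, 130), (50, 90)]:
        print(ref, meter, zone(ref, meter))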

References

  1. Krouwer JS, Cembrowski GS. Acute versus chronic injury in error grids. J Diabetes Sci Technol. http://dst.sagepub.com/content/early/2014/07/16/1932296814543662 (subscription may be required).
  2. Klonoff DC, Lias C, Vigersky R, et al. The surveillance error grid. J Diabetes Sci Technol. 2014;8:658-672.

Why I’m losing interest in clinical chemistry

June 1, 2014

tired

I had occasion to review a revision to the CLSI guidance document EP19. However, when I downloaded the document, out popped a revision to EP21 instead.

What I had pushed for in the revision of EP21 – before I got kicked off the subcommittee – is gone. EP21 is about total error, and in practice the protocol determines which error sources are allowed to occur in the evaluation experiment. If you have a bunch of regulatory affairs people on the subcommittee, you restrict the allowable error sources and things look great but don’t necessarily reflect reality.

In the glucose meter POCT world, the CLSI version of a glucose meter standard had the above and other limitations, about which I ranted. And then the FDA came out with its own draft guidance and said – fuhgetaboutit – meaning don’t use the CLSI standard, we’ve come out with our own. I have talked about this before, but note that the FDA draft guidance wants to see experiments performed in the hands of the intended users – unlike the revision of EP21, which goes out of its way to exclude this error source.

I have been fighting this battle for so long that, I have to say, I’m losing interest.


Reviews

April 9, 2014

review

In the last 2 months, I’ve been asked to conduct 5 reviews, all for different journals, to determine whether a manuscript should be accepted. I performed 4 of the reviews and declined the fifth because the title and abstract alerted me to the fact that I probably wouldn’t understand one word of the paper. Before that, there was a 4-month period with no review requests, so you never know when the requests will occur.

Performing these reviews is the other side of the coin – I’ve submitted many papers of my own and read the reviews of them. I know how it feels to have my own paper severely criticized, so I try to be gentle when I see something wrong, but on the other hand I never have a problem pointing out problems.

The review request contains the title and abstract; if you agree to perform the review, you get the full paper. Many papers require revision, which often means another review, in which I can see how the authors responded to my comments.


Why GUM will never be enough

April 7, 2014

gum

I occasionally come across articles that describe a method evaluation using GUM (Guide to the expression of Uncertainty in Measurement). These papers can be quite impressive with respect to the modeling that occurs. However, there is often a statement that relates the results to clinical acceptability. Here’s why there is a problem.

Clinical acceptability is usually not defined but is often implied to mean method performance that will not cause patient harm due to assay error.

A GUM analysis usually specifies an interval that contains 95% of the results. But if the analysis shows that the assay just meets limits, then 5% of the results fall outside them and could cause patient harm. Now according to GUM models, those 5% will be close to the limits, because the data are assumed to be Gaussian, so this is a minor problem.
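A minimal sketch of that reasoning in Python, assuming a Gaussian error model in which the assay just meets a 95% limit (TEa and sigma are illustrative):

    # Under a Gaussian model, if an assay just meets a 95% limit, the 5% of
    # results outside the limit sit barely outside it; large errors are
    # essentially impossible under the model. Values are illustrative.
    from scipy.stats import norm

    TEa = 1.0             # allowable total error (arbitrary units)
    sigma = TEa / 1.96    # assay "just meets" the 95% limit

    print(2 * norm.sf(TEa, scale=sigma))      # P(|error| > TEa)   ~ 0.05
    print(2 * norm.sf(2 * TEa, scale=sigma))  # P(|error| > 2*TEa) ~ 9e-5
    print(2 * norm.sf(4 * TEa, scale=sigma))  # P(|error| > 4*TEa) ~ 5e-15

Under the Gaussian assumption, a result four times past the limit is effectively impossible – which is exactly why the model misses the errors described next.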

A bigger problem is that a GUM analysis often ignores rare but large errors, such as a rare interference or, more insidiously, a user error that results in a large assay error. (Often GUM analyses don’t assess user error at all.) These large errors, while rare, are associated with major harm or death.
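A small simulation illustrates the point. This is a hypothetical mixture model; the 0.1% gross-error rate and the 5×TEa error size are made-up numbers for illustration:

    # Hypothetical illustration: routine analytical error is Gaussian, but
    # 0.1% of results carry a gross error (e.g., a user error or rare
    # interference). The rate and size below are made up for illustration.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    TEa = 1.0
    sigma = TEa / 1.96                      # assay "just meets" the 95% limit

    errors = rng.normal(0.0, sigma, n)      # routine analytical error
    gross = rng.random(n) < 0.001           # 0.1% of results get a gross error
    errors[gross] += rng.choice([-1.0, 1.0], gross.sum()) * 5 * TEa

    frac_large = np.mean(np.abs(errors) > 2 * TEa)
    print(f"P(|error| > 2*TEa) = {frac_large:.4%}")
    # A Gaussian-only budget predicts ~0.009%; the mixture yields ~0.11%,
    # nearly all of it large errors the GUM model never saw.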

The remedy is to conduct an FMEA or fault tree analysis in addition to GUM, to brainstorm how large errors could occur and whether mitigations are in place to reduce their likelihood. Unless risk analysis is added to GUM, talking about clinical acceptability is misleading.

