CLSI EP11 – Uniformity of claims – update

August 27, 2011


The CLSI document EP11 was originally published in 1995 as a proposed document. Among other things, it explains how the same performance data can lead to different manufacturers' claims. EP11 provides a terminology for claims so that users (labs) can understand what the claims mean.

Even though EP11 was approved by its subcommittee and the area committee, the Board of Directors cancelled the project in 2003. Although most manufacturers supported the document, a few strongly opposed it and influenced the Board. It does not appear in the CLSI catalog. Under the new CLSI policy, the area committee has the final say on documents, so I brought up revisiting EP11 at a recent CLSI meeting.

The feedback showed virtually no support for bringing back EP11, so it is a dead issue. One of the main comments was to let each EP document manage its own way of stating performance claims. I pointed out that the current way claims are stated is often poor. As one example, EP7-A2 (about interferences) states:

“The following substances, when tested in serum at AST activities of 25 and 200 U/L according to the CLSI protocol, were found not to interfere at the concentrations indicated. A bias of less than 10% is not considered a significant interference.”…

This seems pretty bogus to me. It’s just bad science to say a substance was “found not to interfere” when it does interfere (even if the bias is considered not to be significant). Moreover, since many of these documents have gone through several revisions without improving the way claims are stated, it seems like a brush-off to say leave claims to each EP document.

But perhaps there’s another way of looking at things. A few years ago, I proposed a method for a lab to compare the quality among its assays by calculating Cpm (process capability, a unitless measure) for all assays using existing quality control data (1). But one requirement for this to work is for each lab to have performance goals for each assay. When I asked a few lab directors what their performance goals were, I got back blank stares, eventually followed by: do you mean CLIA goals? Hence it would seem that most lab directors do not think in terms of performance goals, and this may be more important than how performance claims are stated.


  1. Jan S. Krouwer. Assay Development and Evaluation: A Manufacturer’s Perspective. AACC Press, Washington DC, 2002, pp. 96–101.
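As a sketch of the Cpm idea: Cpm is commonly defined as the Taguchi capability index, which penalizes both imprecision and bias from a target value. The function, QC values, and performance goals below are hypothetical illustrations, not taken from the referenced chapter.

```python
import statistics

def cpm(qc_results, lower_limit, upper_limit, target):
    # Taguchi process capability index Cpm (unitless): the width of the
    # performance goal divided by 6 * tau, where tau combines the assay's
    # imprecision (SD) and its bias from the target value.
    mu = statistics.fmean(qc_results)
    sigma = statistics.pstdev(qc_results)
    tau = (sigma ** 2 + (mu - target) ** 2) ** 0.5
    return (upper_limit - lower_limit) / (6 * tau)

# Hypothetical glucose QC results (mg/dL) with a goal of 100 +/- 6 mg/dL
print(cpm([98, 100, 102], lower_limit=94, upper_limit=106, target=100))
```

Because the index is unitless, a lab could rank all of its assays on one scale, but only after it has set performance goals (the limits and target) for each assay.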

An explanation of confidence intervals

August 20, 2011

A glucose assay has error and:

  • small errors may cause no harm
  • larger errors may cause minor harm
  • even larger errors may cause moderate harm
  • even larger errors may cause severe harm or death – for the purposes of this post, this amount of error will be called a failure

Assume one wishes to determine the likelihood of a failure. There are three ways to do this, and all are recommended:

  • perform a FMEA (failure mode and effects analysis) on the assay process (including pre- and post- analytical steps) to determine the risk of failure and to provide mitigations to reduce risk
  • perform an evaluation comparing the results of the assay to reference
  • monitor the results of the assay (if available) for problems

In performing the method comparison, one has defined the magnitude of error that constitutes a failure. Each assay result is either a failure or not a failure.

  • The number of results assayed is N.
  • The number of failures is x.
  • The failure rate, expressed as a percentage, is (x/N) × 100.

Consider two evaluations, one with a sample size of 10 and the other with a sample size of 1,000, each with an observed failure rate of zero. Intuitively, one has more confidence in the evaluation with the larger sample size; a confidence interval helps quantify this. It is important to remember that even though the observed failure rate is zero for both sample sizes, the true failure rate is unknown.

The following table shows the upper limit of the 95% confidence interval for a series of sample sizes, each with zero observed failures.

N         x   Failure rate   95% CI     Poss. fails in a million
10        0   0%             30.85%     308,507
100       0   0%             3.62%      36,217
1,000     0   0%             0.37%      3,682
10,000    0   0%             0.037%     369
100,000   0   0%             0.0037%    37
The last column shows how many failures could occur in a million samples. The figure at the top of the post shows the length of the confidence intervals.

But it is important to note that the last column, the possible failures in a million samples, is an upper bound (at 95% confidence). The actual number of failures in a million samples is unknown because the true failure rate is unknown. The purpose of the confidence interval is to show that, with 95% confidence, no more than the indicated number of failures could occur.

Even with 100,000 samples, one has only proved that no more than 37 catastrophic events could occur in a million samples. Another way of stating this: it is hard to prove that rare events don’t happen.
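These bounds can be reproduced with a short script. As a sketch, this assumes the bound is the exact (Clopper-Pearson) upper limit of the two-sided 95% interval for zero observed failures, which solves (1 − p)^N = 0.025 for p; that choice reproduces the 37-per-million figure for N = 100,000.

```python
def upper_limit_zero_failures(n, alpha=0.05):
    # Exact (Clopper-Pearson) upper limit of the two-sided CI for the
    # failure rate p when 0 failures are observed in n samples:
    # solve (1 - p)**n = alpha / 2 for p.
    return 1.0 - (alpha / 2.0) ** (1.0 / n)

for n in (10, 100, 1_000, 10_000, 100_000):
    p = upper_limit_zero_failures(n)
    print(f"N={n:>7,}  upper 95% limit={p:.4%}  possible fails per million={p * 1e6:,.0f}")
```

Note how slowly the bound shrinks: each tenfold increase in sample size buys roughly a tenfold reduction in the provable failure rate, which is why ruling out rare events takes enormous studies.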

And finally, the confidence intervals are valid only if a representative set of samples has been taken.

30 Airports

August 17, 2011

I started flying about 3 years ago and recently landed at my 30th airport. The picture above (click to enlarge) shows the New England airports; the others are in Indiana. The chart below shows the types of aircraft I’ve been flying. Most are high wing and have “glass cockpits” (2 LCD panels instead of small round gauges). The Diamond DA-40 is low wing, and the Robinson R44 is a helicopter. Since I received my private pilot certificate (2 years this coming December), I’ve been filming most of my flights, which can be seen here.

Glucose Meter Accuracy from a patient’s perspective

August 16, 2011

In industry, academia, and government, we pontificate a lot about how good glucose meter performance should be. Here are some patients’ experiences.