An explanation of confidence intervals

A glucose assay has measurement error, and:

  • small errors may cause no harm
  • larger errors may cause minor harm
  • even larger errors may cause moderate harm
  • even larger errors may cause severe harm or death – for the purposes of this post, this amount of error will be called a failure

Assume one wishes to determine the likelihood of a failure. There are three ways to do this and all are recommended:

  • perform an FMEA (failure mode and effects analysis) on the assay process (including pre- and post-analytical steps) to determine the risk of failure and to provide mitigations that reduce risk
  • perform an evaluation comparing the results of the assay to a reference method
  • monitor the results of the assay (if available) for problems

To perform the method comparison, one must first define the magnitude of error that constitutes a failure. Each assay result is then either a failure or not a failure.

The number of results assayed is N.
The number of failures is x.
The failure rate, expressed as a percentage, is (x/N) × 100.
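
As a small worked example (the counts below are hypothetical, not taken from the post):

    # Hypothetical counts, for illustration only.
    N = 1000          # number of results assayed
    x = 0             # number of observed failures

    failure_rate = (x / N) * 100
    print(f"{failure_rate}% failures")   # prints: 0.0% failures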

Consider two evaluations, one with a sample size of 10 and the other with a sample size of 1,000, each with zero observed failures. Intuitively, one has more confidence in the evaluation with the larger sample size; a confidence interval quantifies that intuition. It is important to remember that even though the observed failure rate is zero for both sample sizes, the true failure rate is unknown.

The following table shows the upper limit of the 95% confidence interval on the failure rate for a series of sample sizes, each with zero observed failures.

N         x   Failure rate   Upper 95% CI   Possible failures in a million
100       0   0%             3.6217%        36,217
500       0   0%             0.7351%        7,351
1,000     0   0%             0.3682%        3,682
10,000    0   0%             0.0369%        369
50,000    0   0%             0.0074%        74
100,000   0   0%             0.0037%        37

The last column shows how many failures could occur in a million samples. The figure at the top of the post shows the length of the confidence intervals.
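
The upper limits in the table are consistent with the exact binomial (Clopper-Pearson) upper confidence limit, which for zero observed failures reduces to the closed form 1 - (0.025)^(1/N). A short Python sketch, written under that assumption, reproduces the table and the last column:

    # Exact (Clopper-Pearson) upper limit of a two-sided 95% CI for a
    # proportion when 0 failures are observed in n samples.
    # For x = 0 the limit has the closed form 1 - (alpha/2)**(1/n).
    def upper_95_ci_zero_failures(n):
        alpha = 0.05
        return 1 - (alpha / 2) ** (1 / n)

    for n in (100, 500, 1_000, 10_000, 50_000, 100_000):
        upper = upper_95_ci_zero_failures(n)
        print(f"N={n:>7,}  upper 95% CI = {upper * 100:.4f}%  "
              f"possible failures per million = {upper * 1_000_000:,.0f}")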

But it is important to note that the last column – the possible failures in a million samples – is an upper bound (at 95% confidence). The actual number of failures in a million samples is unknown because the failure rate is unknown. The purpose of the confidence interval is to provide assurance, at the stated confidence level, that no more than the indicated number of failures would occur.

Even with 100,000 samples, one has only shown, with 95% confidence, that no more than 37 catastrophic events could occur in a million samples. Another way of stating this is that it is hard to prove that rare events don't happen.
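
To see how hard, the same closed-form bound used above (again an assumption: the exact binomial limit for zero observed failures) can be inverted to ask how many consecutive failure-free samples would be needed to show, with 95% confidence, a failure rate below one in a million:

    import math

    # Smallest n such that the exact upper 95% CI for zero failures,
    # 1 - (alpha/2)**(1/n), falls below the target failure rate.
    target_rate = 1e-6    # one failure per million
    alpha = 0.05
    n_required = math.ceil(math.log(alpha / 2) / math.log(1 - target_rate))
    print(n_required)     # roughly 3.7 million failure-free samples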

And finally, the confidence intervals are valid only if a representative set of samples has been taken.
