Maybe it’s finally time to talk about outliers

I’ve argued for some time that to properly evaluate an assay, one has to account for all of the results. The conventional wisdom is to specify limits for 95% of the results, with the implication that if the results are within limits, the assay is acceptable. Implicit (or sometimes explicit) in the 95% requirement is the assumption that the data are normally distributed, which makes large errors extremely unlikely.
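To see why the normality assumption makes large errors look rare, here is a minimal sketch of two-sided normal tail probabilities, using only the Python standard library; the SD cutoffs are illustrative, not from any particular assay spec:

```python
import math

def normal_tail(k: float) -> float:
    """Two-sided tail probability P(|Z| > k) for a standard normal Z."""
    return math.erfc(k / math.sqrt(2))

# Under normality, results beyond a few SDs are vanishingly rare:
for k in (2, 3, 4):
    print(f"P(|error| > {k} SD) = {normal_tail(k):.5%}")
```

At 2 SD the two-sided tail is about 4.55%, which is where the 95% convention comes from; by 4 SD the predicted rate is under 0.01%, far below the outlier rates discussed next.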

A recent paper on troponin outliers, to be published in May in Clinical Chemistry (subscription required), is available now. The authors found that for 4 methods, the outlier rate ranged from 0.06% to 0.44%. Thus, all of these assays would fly under the radar if they met requirements for 95% of results. To put these rates in perspective, for 1,000,000 results a rate of 0.06% is 600 outliers and 0.44% is 4,400 outliers. These outliers are of one type: irreproducible outliers in duplicate samples. Reproducible outliers due to an interfering substance are another type. So the rate of outliers due to all causes would be larger, although, given the way outliers were calculated in this study, assays with very good precision were at a disadvantage.
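The per-million figures above are just rate times volume; a quick sketch (the rates are from the paper, the 1,000,000-result volume is the illustrative figure used here):

```python
n_results = 1_000_000
for rate_pct in (0.06, 0.44):
    # Convert percent to fraction, then scale to the result volume.
    outliers = round(n_results * rate_pct / 100)
    print(f"{rate_pct}% of {n_results:,} results = {outliers:,} outliers")
```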

The study included 2,391 samples, far more than laboratories would be willing to run to qualify an assay. This is probably one reason that people don’t talk about outliers: evaluating them by running samples requires too many samples. The most efficient way to evaluate the potential for outliers is to perform risk analysis.
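One way to see why direct evaluation needs so many samples is the “rule of three”: to have roughly 95% confidence of observing at least one outlier occurring at true rate p, you need about 3/p samples. A sketch, using the paper’s lowest rate (the helper name is mine):

```python
def samples_needed(p: float) -> int:
    """Rule of three: n ~ 3/p samples for ~95% chance of seeing >= 1 event."""
    return round(3 / p)

p_low = 0.0006  # the study's lowest outlier rate, 0.06%
print(samples_needed(p_low))  # about 5,000 samples, more than the study's 2,391

# Probability of seeing zero outliers at that rate in 2,391 samples:
p_zero = (1 - p_low) ** 2391
print(f"{p_zero:.0%}")  # roughly a 1-in-4 chance of missing them entirely
```

So even this unusually large study could plausibly have missed outliers entirely at the lowest observed rate, which is why risk analysis is the more practical route.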
