August 21, 2013
Cuthbert Daniel was one of my statistical mentors. His book (Daniel, C., Applications of Statistics to Industrial Experimentation, Wiley, New York, 1976, p. 5) contains a valuable quote:
“the observations must be a fair (representative, random) sample of the population about which inferences are desired”
Simple enough, but when one looks at most assay evaluations, it is clear that the data do not meet this criterion. For example, glucose meter evaluations often use highly trained rather than routine users and exclude samples that would not be excluded in routine use. The rationale is that the evaluations follow the ISO guideline. But bias – even when it is built into the ISO guideline – is still bias, and the results will not reflect the errors that will actually be observed.
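The effect of excluding troublesome samples can be illustrated with a small simulation. Everything here is assumed for illustration – the fraction of problem samples, the error spreads, and the allowable-error limit are made up, not taken from any ISO guideline or real evaluation – but the point holds under any such assumptions: screening out problem samples makes the estimated error rate look better than what routine use will deliver.

```python
import random

random.seed(0)

# Hypothetical setup: a routine population contains a small fraction of
# "problem" samples (e.g., interfering substances) whose measurement
# errors are much larger than those of clean samples. All numbers are
# assumptions for illustration only.
N = 100_000
PROBLEM_FRACTION = 0.05          # assumed share of problem samples
CLEAN_SD, PROBLEM_SD = 5.0, 40.0  # assumed error SDs (mg/dL)
LIMIT = 15.0                      # assumed allowable error (mg/dL)

def meter_error(is_problem):
    """Simulated measurement error for one sample."""
    sd = PROBLEM_SD if is_problem else CLEAN_SD
    return random.gauss(0.0, sd)

is_problem = [random.random() < PROBLEM_FRACTION for _ in range(N)]
errors = [meter_error(p) for p in is_problem]

# "Evaluation" estimate: problem samples excluded, mimicking a protocol
# that screens out specimens a routine user would not screen out.
clean = [e for e, p in zip(errors, is_problem) if not p]
eval_rate = sum(abs(e) > LIMIT for e in clean) / len(clean)

# "Field" estimate: the full, representative population.
field_rate = sum(abs(e) > LIMIT for e in errors) / N

print(f"error rate, problem samples excluded: {eval_rate:.3%}")
print(f"error rate, representative sample:    {field_rate:.3%}")
```

With these made-up numbers, the representative sample shows an error rate many times higher than the screened evaluation sample – exactly the gap Daniel's quote warns about.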
August 1, 2013
This year I split my time between AACC and AirVenture (an airplane show – the picture shows a pint-sized jet), as the two events overlap. At AACC, I enjoyed meeting people I hadn't seen for a while. At the Evaluation Protocols meeting, it seemed as if there were no young people – just different older people. During the meeting, someone brought up the need for a document about outliers. No one seemed to know about the fairly recent (2001) attempt to produce such a document (EP20). For reasons that readers may understand, my participation at CLSI meetings is limited to listening. The reason EP20 went nowhere is that when you start talking about outliers, you ultimately arrive at the question of what clinical performance limits demarcate an outlier from a garden-variety error. There is no way CLSI wants to produce such a list for assays, even though it would be useful and even though they sort of have one for glucose meters – hence no EP20 document.