I had occasion to read about the suggestion that some fraction (often 50%) of biological variation should play a role in setting assay performance standards. This makes no sense to me. Here’s why.
The most fundamental measure of assay performance is diagnostic accuracy. That is: sensitivity, the percentage of people with the disease whose assay value is above the cutoff, and specificity, the percentage of people without the disease whose assay value is below the cutoff.
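These two definitions can be tallied directly from a set of results. This is a minimal sketch; the cutoff of 6.0 and the sample values are hypothetical numbers chosen for illustration.

```python
# Sketch: computing sensitivity and specificity from assay results.
# The cutoff and the sample data below are hypothetical.

def sensitivity_specificity(values, has_disease, cutoff):
    """Classify each value against the cutoff and tally against disease status."""
    tp = sum(1 for v, d in zip(values, has_disease) if v > cutoff and d)
    fn = sum(1 for v, d in zip(values, has_disease) if v <= cutoff and d)
    tn = sum(1 for v, d in zip(values, has_disease) if v <= cutoff and not d)
    fp = sum(1 for v, d in zip(values, has_disease) if v > cutoff and not d)
    sensitivity = tp / (tp + fn)   # fraction of diseased people correctly flagged
    specificity = tn / (tn + fp)   # fraction of non-diseased people correctly cleared
    return sensitivity, specificity

values      = [7.1, 5.2, 6.8, 4.9, 5.9, 5.5, 7.4, 6.2]
has_disease = [True, False, True, False, True, False, True, False]
sens, spec = sensitivity_specificity(values, has_disease, cutoff=6.0)
print(sens, spec)  # 0.75 0.75: one false negative (5.9), one false positive (6.2)
```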
Biological variation serves to decrease diagnostic accuracy. If a person who does not have the disease has a spike in the assay value due to biological variation, and this elevates the value beyond the cutoff, the result is a false positive. The more biological variation, the greater the decrease in diagnostic accuracy. Analytical error does the same thing: the more error, the lower the observed diagnostic accuracy. From a diagnostic accuracy standpoint, there is no difference between biological variation and analytical error. Thus, it makes no sense that the allowable analytical error of an assay should be set at 50% of the biological variation.
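The interchangeability of the two error sources can be illustrated with a small calculation. Under the simplifying assumptions that biological variation and analytical error are independent and roughly normal, their variances add, so either one pushes more healthy people past the cutoff. All the numbers here (healthy mean 5.0, cutoff 6.0, biological SD 0.4) are hypothetical.

```python
import math

def false_positive_rate(mu, cutoff, sd_bio, sd_analytical):
    """P(observed value > cutoff) for a person without the disease,
    assuming independent, normally distributed biological and
    analytical components (a simplifying assumption for this sketch)."""
    total_sd = math.sqrt(sd_bio**2 + sd_analytical**2)  # variances add
    z = (cutoff - mu) / total_sd
    phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))    # standard normal CDF
    return 1.0 - phi

# Hypothetical numbers: healthy mean 5.0, cutoff 6.0, biological SD 0.4.
no_error   = false_positive_rate(5.0, 6.0, 0.4, 0.0)
with_error = false_positive_rate(5.0, 6.0, 0.4, 0.2)  # analytical SD = 50% of biological
print(no_error, with_error)
```

With these numbers, allowing analytical error of 50% of the biological SD roughly doubles the false positive rate (about 0.6% to about 1.3%), which is the point: the cutoff does not care where the variation came from.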
How should performance standards be set?
Use error grids to define limits beyond which no results should occur (e.g., errors large enough to have high potential to cause patient harm). These limits are called limits of erroneous results (LER) by the FDA. These are the most important limits and are set using clinical judgment.
The area in an error grid that should contain most of the results (often 95%) is less important and can be set using performance achieved by existing technology, with the caveat that consideration must be given to special circumstances such as cost, turnaround time when it's important, and so on.
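Setting that zone from existing-technology performance amounts to taking a high percentile of observed errors from a method-comparison study. A minimal sketch, using made-up percent-error data and a nearest-rank percentile:

```python
import math

# Sketch: deriving the "95% of results" boundary from the performance of
# existing technology. The percent errors below are made-up numbers standing
# in for a method-comparison study.

def percentile(sorted_vals, p):
    """Nearest-rank percentile on an already-sorted list."""
    k = max(0, math.ceil(p / 100.0 * len(sorted_vals)) - 1)
    return sorted_vals[k]

percent_errors = sorted([1.2, 0.8, 3.5, 2.1, 0.4, 5.0, 1.9, 2.7, 0.9, 4.2,
                         1.1, 3.0, 2.4, 0.6, 1.5, 2.9, 3.8, 1.7, 0.3, 2.2])
limit_95 = percentile(percent_errors, 95.0)
print(limit_95)  # 4.2: a candidate boundary for the 95% zone
```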