January 31, 2018
In clinical chemistry, one often hears that there are two contributions to error – systematic error and random error. Random error is often estimated by taking the SD of a set of observations of the same sample. But does the SD estimate random error? And are repeatability and reproducibility forms of random error? (Recall that repeatability = within-run imprecision and reproducibility = long-term (or total) imprecision.)
Example 1 – An assay with linear drift with 10 observations run one after the other.
The SD of these 10 observations = 1.89. But if one sets up a regression with Y = drift + error, the error term is 0.81. Hence, the real random error is much less than the SD estimate of random error, because the observations are contaminated with a bias (namely drift). So here is a case where taking the SD doesn't measure random error; one has to investigate further.
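This can be sketched with simulated numbers (invented for illustration, not the figure's actual values): the SD of drifting observations overstates random error, while the SD of the residuals after fitting the drift comes much closer to it.

```python
import statistics

# Hypothetical data: 10 replicates measured sequentially, contaminated
# by a steady upward drift (these numbers are illustrative only).
obs = [10.1, 10.4, 11.0, 11.2, 11.6, 12.1, 12.3, 12.9, 13.2, 13.5]
x = list(range(len(obs)))

# Naive "random error": SD of all 10 observations
sd_raw = statistics.stdev(obs)

# Fit Y = drift*x + intercept by least squares, then take the SD of
# the residuals -- the drift-corrected estimate of random error.
mx, my = statistics.mean(x), statistics.mean(obs)
slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, obs)) / sum((xi - mx) ** 2 for xi in x)
intercept = my - slope * mx
resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, obs)]
sd_resid = statistics.stdev(resid)  # note: stdev divides by n-1; a regression would use n-2

print(round(sd_raw, 2), round(sd_resid, 2))
```

With these made-up numbers the raw SD is roughly an order of magnitude larger than the residual SD, mirroring the 1.89 vs. 0.81 contrast above.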
Example 2 – An assay with calibration (drift) bias, using the same figure as above (OK, I used the same numbers, but this doesn't matter).
Assume that in the above figure, each N is the average of a month of observations, corresponding to a calibration. Each subsequent month has a new calibration.
Clearly, the same argument applies. There is now calibration bias, which inflates the apparent imprecision, so once again the real random error is much less than what one measures by taking the SD.
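A similar sketch for the calibration case, with three hypothetical "months" of results that each carry their own calibration bias (all numbers invented): the SD over all results is inflated relative to the pooled within-month SD, which is the genuine random error.

```python
import statistics

# Hypothetical sketch: three "months" of results, each recalibrated,
# so each month carries its own calibration bias (numbers invented).
months = [
    [10.4, 10.6, 10.5, 10.7],  # calibration biased high
    [9.5, 9.7, 9.6, 9.4],      # calibration biased low
    [10.2, 10.4, 10.3, 10.5],  # a third calibration
]

# SD over all observations: inflated by month-to-month calibration bias
all_results = [r for month in months for r in month]
sd_total = statistics.stdev(all_results)

# Pooled within-month SD: the genuine random error
sd_within = statistics.mean(statistics.variance(m) for m in months) ** 0.5

print(round(sd_total, 2), round(sd_within, 2))
```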
January 25, 2018
Anyone who has even briefly ventured into the realm of statistics has seen the standard setup. One states a hypothesis, plans a protocol, collects and analyzes data and finally concludes that the hypothesis is true or false.
Yet a typical lab medicine evaluation will state the importance of the assay, present data about precision, bias, and other parameters and then launch into a discussion.
What’s missing is the hypothesis, or in terms that we used in industry – the specifications. For example, assay A should have a CV of 5% or less in the range of XX to YY. After data analysis, the conclusion is that assay A met (or didn’t meet) the precision specification.
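Such a specification is mechanically checkable. As a hedged sketch, here is what the check might look like for a precision specification of "CV of 5% or less" (the threshold and the evaluation data below are made up):

```python
import statistics

# Hypothetical precision specification and evaluation data
spec_cv_pct = 5.0
results = [99.1, 101.3, 100.2, 98.7, 100.9, 99.8, 101.1, 100.4]

# Coefficient of variation as a percentage
cv_pct = 100 * statistics.stdev(results) / statistics.mean(results)
meets_spec = cv_pct <= spec_cv_pct

print(f"CV = {cv_pct:.1f}%, precision specification met: {meets_spec}")
```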
These specifications are rarely if ever present in evaluation publications. Try to find a specification the next time you read an evaluation paper. And without specifications, there are usually no meaningful conclusions.
January 17, 2018
Doing my infrequent journal scan, I came across the following paper – “The use of error and uncertainty methods in the medical laboratory” available here. Ok, another sentence floored me (it’s in the abstract)… “Performance specifications for diagnostic tests should include the diagnostic uncertainty of the entire testing process.” It’s a little hard to understand what “diagnostic uncertainty” means. The sentence would be clearer if it read: “Performance specifications for diagnostic tests should include the entire testing process.” But isn’t this obvious? Does this need to be stated as a principle in 2018?
January 17, 2018
Doing my infrequent journal scan, I came across the following paper – “The use of error and uncertainty methods in the medical laboratory” available here. One sentence caught my eye… “Lately, efforts have been made to expand the TAE concept to the evaluation of results of patient samples, including all phases of the total testing process.” Here TAE refers to total analytical error.
- How can one expand TAE to include patient results? Aren’t patient results what it’s all about? What can TAE possibly mean if patient results are not included?
- If we now have the expanded version of TAE, what did we have before – when is total equal to total? Doesn’t total mean everything?
January 15, 2018
There has been some recent discussion about the differences between total error and measurement uncertainty, regarding which is better and which should be used. Rather than rehash the differences, let’s examine some similarities:
1. Both specifications are probability-based.
2. Both are models.
Being probability-based is the bigger problem. If you specify limits for a high percentage of results (say 95% or 99%), then either 5% or 1% of results are unspecified. If all of the unspecified results caused problems, this would be a disaster when one considers how many tests are performed in a lab. There are instances of medical errors due to lab test error, but these are (probably?) rare (meaning much less than 5% or 1%). But the point is that probability-based specifications cannot account for 100% of results, because limits containing 100% would have to run from minus infinity to plus infinity.
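A quick sketch of the last point, assuming normally distributed results with a made-up mean of 100 and SD of 2: the limits needed to contain a given fraction of results grow without bound as that fraction approaches 100%.

```python
from statistics import NormalDist

# Assume (for illustration only) that results follow a normal
# distribution with mean 100 and SD 2.
dist = NormalDist(mu=100, sigma=2)

# Half-width of symmetric limits needed for increasing coverage
for coverage in (0.95, 0.99, 0.999, 0.99999):
    half_width = dist.inv_cdf((1 + coverage) / 2) - dist.mean
    print(f"{coverage} coverage needs limits of 100 +/- {half_width:.2f}")
```

The half-width keeps growing with coverage; `inv_cdf(1.0)` is undefined, which is the minus-infinity-to-plus-infinity problem in code form.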
The fact that both total error and measurement uncertainty are models is only a problem because the models are incorrect. Rather than rehash why, here’s a simple solution to both problems.
Add to the specification (either total error or measurement uncertainty) the requirement that zero results are allowed beyond a set of limits. To clarify, there are two sets of limits: an inner set to contain 95% or 99% of results and an outer set that no result should exceed.
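A minimal sketch of such a two-limit check (all limits and results below are invented for illustration): a batch of results passes only if enough of them fall within the inner limits and none fall beyond the outer limits.

```python
# Hypothetical inner limits (meant to contain 95% of results) and
# outer limits (zero results allowed beyond these).
inner = (95.0, 105.0)
outer = (85.0, 115.0)

def meets_two_limit_spec(results, inner, outer, inner_fraction=0.95):
    # Fraction of results inside the inner limits
    frac_inner = sum(inner[0] <= r <= inner[1] for r in results) / len(results)
    # Zero-tolerance condition on the outer limits
    none_beyond = all(outer[0] <= r <= outer[1] for r in results)
    return frac_inner >= inner_fraction and none_beyond

results = [99.2, 101.5, 100.1, 98.3, 102.7, 96.4, 103.8, 100.9, 99.7, 101.2]
print(meets_two_limit_spec(results, inner, outer))
```

A single wild result beyond the outer limits fails the specification regardless of how well the rest of the batch behaves, which is exactly the guarantee the probability-based limits alone cannot give.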
Without this addition, one cannot claim that meeting either a total error or measurement uncertainty specification will guarantee quality of results, where quality means that the lab result will not lead to a medical error.